Anita Mason has a contribution to the long-running genre debate here at the Guardian entitled “Genre fiction radiates from a literary centre”. I think her attempt to constitute this supposed centre self-deconstructs spectacularly, but in a manner that is instructive and worth teasing apart.
This metaphorical representation of the literary as the universal and indeterminate hub from which determinate “rule-governed” genres “radiate” does not cohere with her criteria of demarcation between the literary and the non-literary. On the one hand, the literary can be anything; it is governed by no determinate rules. On the other, dense psychological characterization is necessary for the literary since, she argues, Brave New World and Consider Phlebas fail the test of literariness due to their lack of this attribute.
Well, you can’t have it both ways. Despite Mason’s peremptory reading of The Drowned World, Ballard’s oeuvre is famously unconcerned with character, and its “plot”, such as it is, is incidental to one of the most profoundly literary treatments of the condition of modernity in prose. Few modern novels present a more literary and unitary treatment of their subject than Crash, for example, where a brilliantly intricate chain of metaphors and symbols explores the contingency of desire in the face of technical change.
On these grounds we would also have to exclude postmodern fabulists and experimental writers such as Pynchon, Barthelme, Robbe-Grillet and Christine Brooke-Rose. So Mason’s Ptolemaic rhetoric of centrality is just a blind for her anthropocentrism. The universe of literature, I hope, is post-Copernican and limitless.
Accelerationism combines a transhumanist techno-optimism with a Marxist analysis of the dynamic between the relations and forces of production. Its proponents argue that under capitalism, modern technology is constrained by myopic and socially destructive goals. They argue that rather than abandoning technological modernity for an illusory homeostatic Eden, we should exploit and ramp up its incendiary potential in order to escape from the gravity well of market-dominated resource allocation. Like posthumanism, however, Accelerationism comes in several flavours. Benjamin Noys (who coined the term) first identified Accelerationism as a kind of overkill politics invested in freeing the machinic unconscious described in the libidinal poststructuralisms of Lyotard and Deleuze from the domestication of liberal subjectivity and market mechanisms. This itinerary reaches its apogee in the work of Nick Land, who lent the project a cyberpunk veneer borrowed from the writings of William Gibson and Bruce Sterling.
Land’s Accelerationism aims at the extirpation of humanity in favour of an “abstract planetary intelligence rapidly constructing itself from the bricolaged fragments of former civilisations” (Srnicek and Williams 2013). However, this mirror-shaded beta version has been remodelled and given a new emancipatory focus by writers such as Ray Brassier, Nick Srnicek and Alex Williams (Williams 2013). This “promethean” phase of Accelerationism argues that technology should be reinstrumentalized towards a project of “maximal collective self-mastery”. Promethean Accelerationism certainly espouses the same tactic of exacerbating the disruptive effects of technology, but with the aim of cultivating a more autonomous universal subject. As Steven Shaviro points out in his excellent talk “An Introduction to Accelerationism”, this version replicates orthodox Marxism at the level of both strategy and intellectual justification. Its vision of a rationally-ordered collectivity mediated by advanced technology seems far closer to Marx’s ideas, say, than Adorno’s dismal negative dialectics or the reactionary identity politics that still animates multiculturalist thinking. If technological modernity is irreversible – short of a catastrophe that would render the whole programme moot – it may be the only prospectus that has a chance of working. As Shaviro points out, an incipient accelerationist logic is already at work among communities using free and open-source software like Pd, where R&D on code modules is distributed among skilled enthusiasts rather than professional software houses. (Note that a similar community flourishes around Pd’s fancier commercial cousin, Max MSP, where supplementary external objects are written by users in C++, Java and Python.)
This is a small but significant move away from manufacture dominated by market feedback. We are beginning to see similar tendencies in the manufacture of durables and biotech. The era of downloadable things is upon us. In April 2013, a libertarian group calling themselves Defense Distributed announced that they would release the code for a gun, “the Liberator”, which can be assembled from layers of plastic in a 3D printer (currently priced at around $8,000). The group’s spokesman, Cody Wilson, anticipates an era in which search engines will provide components “for everything from prosthetic limbs to drugs and birth-control devices”.
However, the alarm that the Liberator created in global law-enforcement agencies points to the first of two potential pitfalls for the promethean accelerationist itinerary: the democratization of technology – enabled by its easy iteration from context to context – does not seem liable to increase our capacity to control its flows and applications. Quite the contrary, and this becomes significant when the iterated tech is not just a Max MSP external for randomizing arrays but an offensive weapon, an engineered virus or a powerful AI program. I’ve argued elsewhere that technology has no essence and no itinerary. In its modern form at least, it is counter-final. It is not in control, but it is not in anyone’s control either, and the developments that appear to make a techno-insurgency conceivable are liable to ramp up its counter-finality. This, note, is a structural feature deriving from the increasing iterability of technique in modernity, not from market conditions. There is no reason to think that these issues would not be confronted by a more just world in which resources were better directed to identifiable social goods.
A second issue is also identified in Shaviro’s follow-up discussion over at The Pinocchio Theory: the posthuman. Using a science fiction allegory from a story by Paul Di Filippo, Shaviro suggests that the posthuman could be a figure for a decentred, vital mobilization against capitalism: a line of flight which uses the technologies of capitalist domination to develop new forms of association, embodiment and life. I think this prospectus is inspiring, but it also has moral dangers that Darian Meacham identifies in a paper forthcoming in The Journal of Medicine and Philosophy entitled ‘Empathy and Alteration: The Ethical Relevance of the Phenomenological Species Concept’. Very briefly, Meacham argues that the development of technologically altered descendants of current humans might precipitate what I term a “disconnection” – the point at which some part of the human socio-technical system spins off to develop separately (Roden 2012). I’ve argued that disconnection is multiply realizable – so far as we can tell. But Meacham suggests that a kind of disconnection could result if human descendants were to become sufficiently alien from us that “we” would no longer have a pre-reflective basis for empathy with them. We would no longer experience them as having our relation to the world or our intentions. Such a “phenomenological speciation” might fragment the notional universality of the human, leading to a multiverse of fissiparous and alienated clades like that envisaged in Bruce Sterling’s novel Schismatrix. A still more radical disconnection might result if super-intelligent AIs went “feral”. At this point, the subject of history itself becomes divided. It is no longer just about us. Perhaps Land remains the most acute and intellectually consistent accelerationist after all.
Roden, David (2012). “The Disconnection Thesis.” In The Singularity Hypothesis: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, Jim Moor, and Eric Steinhart. Springer Frontiers Collection.
Srnicek, N. and Williams, A. (2013). “#ACCELERATE MANIFESTO for an Accelerationist Politics.” http://criticallegalthinking.com/2013/05/14/accelerate-manifesto-for-an-accelerationist-politics/
Sterling, Bruce. 1996. Schismatrix Plus. Ace Books.
Williams, Alex (2013). “Escape Velocities.” E-flux (46). Accessed July 11. http://worker01.e-flux.com/pdf/article_8969785.pdf.
Hadley Freeman has an engaging interview with Terminator and Avatar director James Cameron on the excellent Guardian website.
Cameron is underrated by people who think aesthetically realized movies must be all tight-lipped introspection or (worse) draw on prestigious literary sources. Terminator, T2 and Aliens, though, are the epitome of the smart techno-thriller.
There’s a wonderful montage sequence in T2 where the story of the attempt to prevent the Cyberdyne Corporation from inventing the Skynet computer – which will unleash nuclear war on humanity – is suspended. We see only an advancing highway at night while the disembodied voice of Sarah Connor (played by Linda Hamilton) says: “The future, always so clear to me, had become like a black highway at night. We were in uncharted territory now, making up history as we went along.” It’s a juxtaposition worthy of Godard or Resnais. And as Freeman reminds us, Cameron has also written some of the best female parts in recent cinema.
This is part of an ongoing project whose modest goal is to nuance the understanding of humanism in posthumanist philosophy and criticism. Cartesian dualism is one of the main targets of these critiques, but differences between dualisms tend to become obscured in the rush to lay the liberal humanist subject to rest. Here, I’m using Mike Wheeler’s discussion of the Cartesian roots of mainstream cognitive science in Reconstructing the Cognitive World to distinguish between substance dualism (which nobody seems to believe) and explanatory dualism – arguably the orthodoxy in philosophy of psychology, from Kant to Fodor and beyond.
In a much-discussed passage from the fifth part of The Discourse on Method Descartes supports his dualist metaphysics with what we might call the “argument from the impossibility of artificial intelligence”. He claims that no machine (i.e. a biomechanical system such as an animal body) could act in the flexible, adaptable and generally rational way that ordinary humans do:
[We] may easily conceive a machine to be so constructed that it emits vocables, and even that it emits some correspondent to the action upon it of external objects which cause a change in its organs; for example, if touched in a particular place it may demand what we wish to say to it; if in another it may cry out that it is hurt, and such like; but not that it should arrange them variously so as appositely to reply to what is said in its presence, as men of the lowest grade of intellect can do. The second test is, that although such machines might execute many things with equal or perhaps greater perfection than any of us, they would, without doubt, fail in certain others from which it could be discovered that they did not act from knowledge, but solely from the disposition of their organs: for while reason is an universal instrument that is alike available on every occasion, these organs, on the contrary, need a particular arrangement for each particular action; whence it must be morally impossible that there should exist in any machine a diversity of organs sufficient to enable it to act in all the occurrences of life, in the way in which our reason enables us to act. (Descartes 1985, 44).
To ape rationality, a mechanical system would need to integrate special-purpose mechanisms suited to every occasion (a “look-up table” in computational terms – see Wheeler 2005, 34). Since these occasions could ramify to infinity, a mechanical system that could generate reliably appropriate behaviour would require a practically infinite number of parts.
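Descartes’s point can be caricatured in code. The sketch below (entirely illustrative – the examples and names are mine, not the text’s) models his “machine” as a finite look-up table: each occasion must be paired in advance with a canned response, and any occasion the designer failed to anticipate simply falls through.

```python
# A Cartesian "machine" as a finite look-up table: it acts, as Descartes
# puts it, "solely from the disposition of its organs". Each anticipated
# occasion needs its own pre-installed arrangement.
LOOKUP = {
    "touched on the hand": "What do you wish to say to me?",
    "touched on the foot": "Ouch! That hurts.",
}

def respond(occasion: str) -> str:
    # No entry for this occasion means no appropriate behaviour:
    # the machine has no "universal instrument" to fall back on.
    return LOOKUP.get(occasion, "<no arrangement for this occasion>")

print(respond("touched on the hand"))      # a canned, pre-arranged reply
print(respond("asked about the weather"))  # falls through: occasions ramify
```

Since the space of possible occasions is open-ended, covering it by adding entries would require an infinite table – which is exactly Descartes’s “moral impossibility”.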
In “Theorizing Posthumanism” Neil Badmington reads this as an attempt to police an ontological blockade between the human being and the nonhuman animal. However, in a textbook deconstructive move, he claims that its formulation undermines its declared intent. For by identifying the capacity for reasoning with the functional capacity for context-sensitive performance, Descartes allows for the conceptual possibility of a machine so complex that it would have an arrangement for every possible occasion – in effect, running an infinite look-up table.
Descartes’ anthropocentrism is thus less hygienic and secure than it might appear, for it implies that a material system with the complexity to generate flexible performances would be functionally rational. “Reason”, Badmington writes, “no longer capable of ‘distinguish[ing] us from the beasts’, would meet its match, its fatal and flawless double” (Badmington 2003, 18). He then springs his coup de théâtre:
On closer inspection, in other words, there lies within Descartes’s ontological hygiene a real sense in which, to take a line from one of Philip K. Dick’s novels, “[l]iving and unliving things are exchanging properties” (1996, 223; emphasis in original). Between the lines of the text, the lines of humanism cross themselves (out), and the moment at which humanism insists becomes the moment at which it nonetheless desists. Quite against his will, quite against all odds, Descartes has begun to resemble Deckard, the troubled protagonist of Do Androids Dream of Electric Sheep? … and Blade Runner … , who utterly fails to police the boundary between the real and the fake (Ibid.).
Casting Harrison Ford as Descartes is nice, but too quick. While Descartes may have had reasons to observe ontological hygiene precautions, in this passage, at least, he is attempting to motivate belief in a disembodied mind through inference to the best explanation. It is the manifest functional differences between humans and brutes that institutes the quarantine line, not the dualist ontology of extended matter and non-extended mind (Ryle’s “ghost in the machine”).
Descartes makes an empirical assumption about the limits of mechanical complexity in this passage (not an a priori claim about in-principle complexity). If we think this holds, we should infer that our synapses are haunted. If we don’t, we shouldn’t. Badmington’s deconstruction requires that Descartes be saddled with the assumption that rationality is a matter of appropriate functioning rather than spooked synapses. But in that case, there is a mark of the human (or at least of the rational intellect): namely, the capacity to function like humans do.
Thus we may easily ditch substance dualism while holding that an exhaustive list of the arrangements of matter responsible for each token of appropriate behaviour could not explain the capacity for flexible and rational behaviour. For one, any such account would be too vast to afford descriptive economy. More importantly, it would miss the abstract functional facts that, by hypothesis, distinguish us from the brutes. Thus, as Wheeler argues in Reconstructing the Cognitive World: The Next Step, the rejection of substance dualism is compatible with explanatory dualism: a scientific methodology that turns away from neuroscience and biomechanics to consider the inferential processes or practices which produce intelligent, flexible and adaptable behaviour. According to this view: “flexible, intelligent action remains conceptually and theoretically independent of the scientific understanding of the agent’s physical embodiment” (Wheeler 2005, 51).
Badmington, Neil (2003). “Theorizing Posthumanism.” Cultural Critique 53: 10–27.
Descartes, René (1985). A Discourse on Method. Translated by John Veitch. London: Everyman.
Wheeler, Michael (2005). Reconstructing the Cognitive World: The Next Step. MIT Press.
Gareth Powell mentioned that he’s moderating a book club discussion of Samuel R. Delany’s brilliant space opera Nova over at SFX Magazine and asked me for my take. I’m grading essays just now but wasn’t sufficiently wise to resist, so here it is:
Samuel Delany’s writing has always fascinated me. On the one hand he’s an extraordinarily sensual writer. His futures jangle your nerves. In ‘The Star Pit’ you feel his narrator’s dejection at being confined within a mere galaxy – a galaxy!
But Delany has also retooled SF to explore identity, language and social ontology. In Stars in My Pocket Like Grains of Sand he portrays an interstellar civilization in which gender is differently coded from ours by shifting the functions of the masculine and feminine pronouns. In this post-gender world everyone is ‘she’ or ‘a woman’ regardless of sex. ‘He’ is reserved for any human or alien object of sexual desire. Reading Stars helps you think of gender as a mutable cultural virus rather than as destiny or “nature”. Delany thus re-engineers theories about the way language mediates thought, current in Critical Theory and Poststructuralism, and embodies them in an alien flesh we can regard as our own. Together with his path-breaking exploration of queer identity, this suffices to make him one of the most important (and underrated) political writers of our time.
It’s been a long time since my first adolescent reading of Nova. I remember being utterly seduced by the sensory detail and complexity of its star-faring future. I didn’t get the sophisticated games with language then, but the colour and difficulty of his world was unlike anything in the ascetic utopias of Asimov or Clarke. Also Delany’s work had none of the hideous Oxbridge-male ennui that spoilt even the greatest of the British New Wavers. It was hard SF with starships, alien skies and cyborgs, reconstructed for a poly-sexual heterotopia. Without Delany, there’d be no Gibson, no Sterling and no Iain M. Banks. He could be more important than all of those figures.
Alan Moore’s graphic novel Watchmen (filmed back in 2009 by Zack Snyder) is an anti-superhero tale about super anti-heroes. Some of these ‘costumed adventurers’ are obsessives driven by the thrill of dressing up and breaking heads; others are co-opted by political interests or have shadowy agendas of their own. The Watchman known as ‘The Comedian’ is an amoral killer on a fat CIA remittance. The only one with actual superpowers, the glowing blue god Dr Manhattan, casually maintains US nuclear hegemony, but sees humanity as a lower order of being than the inert desert of Mars.
Watchmen honours superhero tradition by sheathing these vigilantes in improbable tights and by culminating in a desperate battle to prevent a maniac from killing lots of Americans. Here, though, the balletic combat is futile. As snippets of broadcast TV testify, the Americans are long dead before the first blow lands, and the architect of the plan, Ozymandias – aka the ‘World’s Smartest Man’ – is just a Watchman with a self-prescribed remit to usher in an era of global peace.
Ozymandias informs his fellow Watchmen that he has saved the world from nuclear annihilation by gulling the US and USSR into uniting against an illusory alien menace (the story is set in an alternative 1980s during Nixon’s third term). To simulate this threat convincingly, though, he has had to kill half the population of New York with a vile artificial life form.
Ozymandias seems like an obvious candidate for villain (this is a comic book, after all). Yet whether this is so turns on the solution to the classic philosophical problem of ‘dirty hands’.
Ozymandias provides a consequentialist argument for his actions. Pure consequentialists believe that actions must be judged according to the value of their outcomes. Thus if the murder of a million New Yorkers is preferable to the death of billions in a nuclear war, it is better to murder a million New Yorkers.
Once they learn that nothing can prevent the deaths, all but one of the Watchmen agree they are ‘morally checkmated’.
Only Rorschach – so-called for the mutating inkblot concealing his face – contests this. He holds that some actions are intrinsically wrong and must be condemned irrespective of any beneficial outcomes they may produce (the philosophical term for this position is deontological ethics).
Let’s assume that Ozymandias is factually correct in believing that humanity would have been destroyed had he not acted. This is the kind of thing we might expect the World’s Smartest Man to know. But Ozymandias has committed murder on the scale of a Hitler or a Pol Pot. Surely, his actions are wrong, no matter what?
So is Rorschach right?
Well, if he is, then Ozymandias should be killed or punished and the plot revealed. But Rorschach’s insistence seems wrong-headed. As Dr Manhattan points out ‘Exposing this plot, we destroy any chance of peace, dooming the Earth to worse destruction’.
Moore reinforces this impression by portraying Rorschach as a moral fanatic obsessed with punishment for its own sake. Ozymandias, by contrast, appears reasonable and genuinely pained by the deaths he has caused. So we seem confronted with four alternatives.
Ozymandias is right to sacrifice millions to save billions.
Or Rorschach is right.
Or they are both wrong.
Or they are both right.
The last possibility can be discounted if their positions are genuine contraries. However, there is another way of interpreting these moral claims. In his famous work The Prince, Niccolò Machiavelli argued that canons of moral right have little place in politics. When deciding the future of a state (or a planet), we should be prepared to commit evil acts to secure the paramount goal of political order.
Machiavelli’s position is a kind of consequentialism. However, he does not claim that conventionally evil acts cease to be bad when performed for worthy political ends. Judged according to the principles of public morality, they are necessary and bear testimony to the prowess of a Prince (or a Superhero). But they’re still wrong according to the standards of personal morality.
Thus, adopting Machiavelli’s position, we can regard Ozymandias as having performed a very ‘dirty’ but necessary act. Both his position and Rorschach’s can then be affirmed on different grounds. Is this a satisfactory resolution of the dilemma? One could object that any claim that an act is politically necessary must involve an appeal to moral grounds if it is not to be merely cynical – and Ozymandias, unlike the Comedian, is no cynic. Thus it remains troublingly uncertain whether this anti-superhero tale contains a genuine super-villain.
© The Open University 2011, all rights reserved
References: Alan Moore and Dave Gibbons, Watchmen, New York: DC Comics, 1987. Zack Snyder (dir.), Watchmen (2009).
Damien Walter’s Guardian blog has a column on a Dickian alternative-history yarn about Bin Laden by the Israeli fabulist Lavie Tidhar and uses it as the exemplar of an oppositional SF that avoids the regressive tropes of some contemporary futurism. Walter’s criticisms of militarist and post-scarcity SF are fair, and I’m very grateful for the introduction to Tidhar’s work. That said, I think his post sells the political possibilities of SF a little short. Science fiction can do more than reaffirm the humanist platitude that ‘behind every manufactured enemy is a real human being’. We can leave such bromides to the Ian McEwans of this world!
Great SF does not remind us of our (questionable) humanity but reaffirms our materiality. Writers like Samuel Delany, Bruce Sterling, Gwyneth Jones and China Miéville have created fictional blueprints for an unlikely politics that, like the thaumaturgic railway engineers of Miéville’s Iron Council, experiments with bodies, spaces, technologies and speeds.