“The Sobornost Station is large enough to have its own weather. The ghost-rain inside does not so much fall but shimmers in the air. It makes shapes and moves, and gives Tawaddud the constant feeling that something is lurking just at the edge of her vision.
She looks up, and immediately regrets it. Through the wet veil, it is like looking down from the top of the Gomelez shard. The vertical lines far above pull her gaze towards an amber-hued, faintly glowing dome almost a kilometer high, made of transparent, undulating surfaces that bunch together towards the centre, like the ceiling of a circus tent, segmented by the sharply curving ribs of the Station’s supporting frame.
Forms like misshapen balloons float beneath the vault. At first they look random, but as Tawaddud watches, they coalesce into shapes: the line of a cheekbone and a chin and an eyebrow. Then they are faces, sculpted from air and light, looking down on her with hollow eyes.”
(Rajaniemi 2012, 82)
Rajaniemi, Hannu (2012). The Fractal Prince. London: Gollancz.
According to the Disconnection Thesis (Roden 2012; 2014: Chapter 5), a posthuman is an agent descended from some part of the human socio-technical system that has “gone feral”. In its ancestral form it may have served human ends, or have been narrowly human itself, but post-disconnection it accrues values and roles elsewhere.
To date there are no posthumans, so we can only guess at their likely powers. But it seems safe to assume that anything capable of cutting loose from the human system would need to be at least as flexible and adaptable as humans are themselves.
These powerful entities might be indifferent to humans; or they might not like us at all, or like us in ways we would not like to be liked. They may view us as a threat, or they may be immensely powerful sadists who devote some part of their technological prowess to killing and torturing us. If posthumans are conceivable, so are very bad posthumans.
So can we do some contingency planning to insure against the emergence of posthuman dark lords? To do this, we would need some handle on the kinds of current technologies that might induce a dark lord disconnection (DLD). But what kinds of technologies could these be?
It might seem that some technological possibilities can be discerned a priori – by consulting reliable conceptual “intuitions” about the extendible powers of current technologies. For example, a being like Skynet – the genocidal military computer in James Cameron’s Terminator films – seems a plausible occupant of a posthuman timeline; whereas Sauron, the supernatural dark lord of Tolkien’s Lord of the Rings, does not. However, since the work of Saul Kripke in the 1970s, many philosophers have come to accept that there are a posteriori natural possibilities and necessities that are only discoverable empirically. That light has the same velocity in every reference frame upsets common-sense intuitions about relative motion and could not have been discovered by reflecting on pre-relativistic concepts of light.
Claims about hypothetical technological possibility may be as vulnerable to refutation as naive physics. States like the US and China employ computers to co-ordinate military activities, so a Skynet seems the more plausible posthuman antagonist. But the fact that there are computers but no supernatural dark lords does not entail that their capacities could be extended in any way we imagine. Light bulbs exist as well as computers, but maybe a Skynet is no more technologically possible than Byron, the intelligent light bulb in Thomas Pynchon’s fabulist novel Gravity’s Rainbow.
So here’s a thing. Posthuman Possibility Space (the set of technically possible routes to disconnection) may contain a Dark Lord Possibility Sub-Space – the set of trajectories all of which lead to a DLD! We may have no reliable indication of what (if anything) belongs to it. But, quite possibly, it is out there, waiting.
Roden, David. 2012. “The Disconnection Thesis”. In The Singularity Hypothesis: A Scientific and Philosophical Assessment, A. Eden, J. Søraker, J. Moor & E. Steinhart (eds), 281–98. London: Springer.
Anita Mason has a contribution to the long-running genre debate here at the Guardian entitled “Genre fiction radiates from a literary centre”. I think her attempt to constitute this supposed centre self-deconstructs spectacularly, but in a manner that is instructive and worth teasing apart.
This metaphorical representation of the literary as the universal, indeterminate hub from which determinate, “rule-governed” genres “radiate” does not cohere with her criteria for demarcating the literary from the non-literary. On the one hand, the literary can be anything and is governed by no determinate rules. On the other, dense psychological characterization is necessary for the literary, since, she argues, Brave New World and Consider Phlebas fail the test of literariness for lack of this attribute.
Well, you can’t have it both ways. Despite Mason’s peremptory reading of The Drowned World, Ballard’s oeuvre is famously unconcerned with character, and its “plot”, such as it is, is incidental to one of the most profoundly literary treatments of the condition of modernity in prose. Few modern novels present a more literary and unitary treatment of their subject than Crash, for example, where a brilliantly intricate chain of metaphors and symbols explores the contingency of desire in the face of technical change.
On these grounds we would also have to exclude postmodern fabulists and experimental writers such as Pynchon, Barthelme, Robbe-Grillet and Christine Brooke-Rose. So Mason’s Ptolemaic rhetoric of centrality is just a blind for her anthropocentrism. The universe of literature, I hope, is post-Copernican and limitless.
Accelerationism combines a transhumanist techno-optimism with a Marxist analysis of the dynamic between the relations and forces of production. Its proponents argue that under capitalism, modern technology is constrained by myopic and socially destructive goals. They argue that rather than abandoning technological modernity for an illusory homeostatic Eden, we should exploit and ramp up its incendiary potential in order to escape from the gravity well of market-dominated resource allocation. Like posthumanism, however, Accelerationism comes in several flavours. Benjamin Noys (who coined the term) first identified Accelerationism as a kind of overkill politics invested in freeing the machinic unconscious described in the libidinal poststructuralisms of Lyotard and Deleuze from the domestication of liberal subjectivity and market mechanisms. This itinerary reaches its apogee in the work of Nick Land, who lent the project a cyberpunk veneer borrowed from the writings of William Gibson and Bruce Sterling.
Land’s Accelerationism aims at the extirpation of humanity in favour of an “abstract planetary intelligence rapidly constructing itself from the bricolaged fragments of former civilisations” (Srnicek and Williams 2013).
However, this mirror-shaded beta version has been remodelled and given a new emancipatory focus by writers such as Ray Brassier, Nick Srnicek and Alex Williams (Williams 2013). This “Promethean” phase of Accelerationism argues that technology should be reinstrumentalized towards a project of “maximal collective self-mastery”.
Promethean Accelerationism certainly espouses the same tactic of exacerbating the disruptive effects of technology, but with the aim of cultivating a more autonomous collective subject. As Steven Shaviro points out in his excellent talk “An Introduction to Accelerationism”, this version replicates orthodox Marxism at the level of both strategy and intellectual justification. Its vision of a rationally ordered collectivity mediated by advanced technology seems far closer to Marx’s ideas, say, than Adorno’s dismal negative dialectics or the reactionary identity politics that still animates multiculturalist thinking. If technological modernity is irreversible – short of a catastrophe that would render the whole programme moot – it may be the only prospectus that has a chance of working. As Shaviro points out, an incipient accelerationist logic is already at work among communities using free and open-source software like Pd, where R&D on code modules is distributed among skilled enthusiasts rather than professional software houses. (Note that a similar community flourishes around Pd’s fancier commercial cousin, Max/MSP, where supplementary external objects are written by users in C++, Java and Python.)
This is a small but significant move away from manufacture dominated by market feedback. We are beginning to see similar tendencies in the manufacture of durables and in biotech. The era of downloadable things is upon us. In April 2013, a libertarian group calling itself Defense Distributed announced that it would release the code for “the Liberator”, a gun that can be assembled from layers of plastic in a 3D printer (currently priced at around $8,000). The group’s spokesman, Cody Wilson, anticipates an era in which search engines will provide components “for everything from prosthetic limbs to drugs and birth-control devices”.
However, the alarm that the Liberator created in global law-enforcement agencies exemplifies the first of two potential pitfalls for the Promethean accelerationist itinerary. The democratization of technology – enabled by its easy iteration from context to context – does not seem liable to increase our capacity to control its flows and applications; quite the contrary, and this becomes significant when the iterated tech is not just a Max/MSP external for randomizing arrays but an offensive weapon, an engineered virus or a powerful AI program.
I’ve argued elsewhere that technology has no essence and no itinerary. In its modern form at least, it is counter-final. It is not in control, but it is not in anyone’s control either, and the developments that appear to make a techno-insurgency conceivable are liable to ramp up its counter-finality. This, note, is a structural feature deriving from the increasing mobility of technique in modernity, not from market conditions. There is no reason to think that these issues would not be confronted by a more just world in which resources were better directed to identifiable social goods.
A second issue is also identified in Shaviro’s follow-up discussion over at The Pinocchio Theory: the posthuman. Using a science fiction allegory from a story by Paul Di Filippo, Shaviro suggests that the posthuman could be a figure for a decentred, vital mobilization against capitalism: a line of flight which uses the technologies of capitalist domination to develop new forms of association, embodiment and life.
I think this prospectus is inspiring, but it also has moral dangers that Darian Meacham identifies in a paper forthcoming in The Journal of Medicine and Philosophy entitled “Empathy and Alteration: The Ethical Relevance of the Phenomenological Species Concept”. Very briefly, Meacham argues that the development of technologically altered descendants of current humans might precipitate what I term a “disconnection” – the point at which some part of the human socio-technical system spins off to develop separately (Roden 2012). I’ve argued that disconnection is multiply realizable – so far as we can tell. But Meacham suggests that a kind of disconnection could result if human descendants were to become sufficiently alien from us that “we” would no longer have a pre-reflective basis for empathy with them. We would no longer experience them as having our relation to the world or our intentions. Such a “phenomenological speciation” might fragment the notional universality of the human, leading to a multiverse of fissiparous and alienated clades like that envisaged in Bruce Sterling’s novel Schismatrix. A still more radical disconnection might result if superintelligent AIs went “feral”. At this point, the subject of history itself becomes fissionable. It is no longer just about “us”. Perhaps Land remains the most acute and intellectually consistent accelerationist after all.
Roden, David. 2012. “The Disconnection Thesis”. In The Singularity Hypothesis: A Scientific and Philosophical Assessment, A. Eden, J. Søraker, J. Moor & E. Steinhart (eds), 281–98. London: Springer.
Srnicek, Nick and Alex Williams. 2013. “#ACCELERATE MANIFESTO for an Accelerationist Politics”. http://criticallegalthinking.com/2013/05/14/accelerate-manifesto-for-an-accelerationist-politics/
Sterling, Bruce. 1996. Schismatrix Plus. Ace Books.
Williams, Alex. 2013. “Escape Velocities”. E-flux (46). Accessed July 11. http://worker01.e-flux.com/pdf/article_8969785.pdf.
Hadley Freeman has an engaging interview with Terminator and Avatar director James Cameron on the excellent Guardian website.
Cameron is underrated by people who think aesthetically realized movies must be all tight-lipped introspection or (worse) draw on prestigious literary sources. Terminator, T2 and Aliens, though, are the epitome of the smart techno-thriller.
There’s a wonderful montage sequence in T2 where the story – the attempt to prevent the Cyberdyne Corporation from inventing the Skynet computer that will unleash nuclear war on humanity – is suspended. We see only an advancing highway at night while the disembodied voice of Sarah Connor (played by Linda Hamilton) says: “The future, always so clear to me, had become like a black highway at night. We were in uncharted territory now, making up history as we went along.” It’s a juxtaposition worthy of Godard or Resnais. And as Freeman reminds us, Cameron’s also written some of the best female parts in recent cinema.
This is part of an ongoing project whose modest goal is to nuance the understanding of humanism in posthumanist philosophy and criticism. Cartesian dualism is one of the main targets of these critiques, but differences between dualisms tend to become obscured in the rush to lay the liberal humanist subject to rest. Here, I’m using Mike Wheeler’s discussion of the Cartesian roots of mainstream cognitive science in Reconstructing the Cognitive World to distinguish between substance dualism (which nobody seems to believe) and explanatory dualism – arguably the orthodoxy in philosophy of psychology, from Kant to Fodor and beyond.
In a much-discussed passage from the fifth part of the Discourse on Method, Descartes supports his dualist metaphysics with what we might call the “argument from the impossibility of artificial intelligence”. He claims that no machine (i.e. a biomechanical system such as an animal body) could act in the flexible, adaptable and generally rational way that ordinary humans do:
[We] may easily conceive a machine to be so constructed that it emits vocables, and even that it emits some correspondent to the action upon it of external objects which cause a change in its organs; for example, if touched in a particular place it may demand what we wish to say to it; if in another it may cry out that it is hurt, and such like; but not that it should arrange them variously so as appositely to reply to what is said in its presence, as men of the lowest grade of intellect can do. The second test is, that although such machines might execute many things with equal or perhaps greater perfection than any of us, they would, without doubt, fail in certain others from which it could be discovered that they did not act from knowledge, but solely from the disposition of their organs: for while reason is an universal instrument that is alike available on every occasion, these organs, on the contrary, need a particular arrangement for each particular action; whence it must be morally impossible that there should exist in any machine a diversity of organs sufficient to enable it to act in all the occurrences of life, in the way in which our reason enables us to act. (Descartes 1985, 44).
To ape rationality, a mechanical system would need to integrate special-purpose mechanisms suited to every occasion (a “look-up table” in computational terms – see Wheeler 2005, 34). Since these occasions could ramify to infinity, a mechanical system that could generate reliably appropriate behaviour would require a practically infinite number of parts.
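The contrast Descartes draws can be rendered as a toy sketch (my illustration, not Wheeler’s or Descartes’; the agents, occasions and function names are invented for the purpose). A look-up-table agent has one hard-wired response per anticipated occasion – the “particular arrangement for each particular action” – while reason as a “universal instrument” is better modelled as a single rule that covers occasions never enumerated in advance:

```python
# A look-up-table agent: one canned response per anticipated occasion,
# echoing Descartes' examples ("if touched in a particular place...").
LOOKUP_TABLE = {
    "touched on the head": "What do you wish to say to me?",
    "touched on the side": "That hurts!",
}

def lookup_respond(occasion: str) -> str:
    # Any occasion missing from the table leaves the machine mute.
    return LOOKUP_TABLE.get(occasion, "")

# A (cartoonishly simple) universal-rule agent: one procedure that
# composes a pertinent reply to indefinitely many novel occasions.
def rational_respond(occasion: str) -> str:
    return f"Regarding '{occasion}', let me consider an apposite reply."

# The table handles only what its builders anticipated...
assert lookup_respond("touched on the head") != ""
assert lookup_respond("asked about the weather") == ""   # novel: no entry

# ...while the rule answers occasions never listed in advance.
assert rational_respond("asked about the weather") != ""
```

The point of the sketch is only structural: to cover every possible occasion, the dictionary would have to grow without bound, whereas the rule’s coverage does not depend on enumerating cases – which is just the asymmetry Descartes exploits.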
In “Theorizing Posthumanism” Neil Badmington reads this as an attempt to police an ontological blockade between the human being and the nonhuman animal. However, in a textbook deconstructive move he claims that its formulation undermines its declared intent. For by identifying the capacity for reasoning with the functional capacity for context sensitive performance, Descartes allows for the conceptual possibility of a machine so complex that it would have an arrangement for every possible occasion – in effect, running an infinite look-up table.
Descartes’ anthropocentrism is thus less hygienic and secure than it might appear, for it implies that a material system with the complexity to generate flexible performances would be functionally rational. “Reason”, Badmington writes, “no longer capable of ‘distinguish[ing] us from the beasts’, would meet its match, its fatal and flawless double” (Badmington 2003, 18). He then springs his coup de théâtre:
On closer inspection, in other words, there lies within Descartes’s ontological hygiene a real sense in which, to take a line from one of Philip K. Dick’s novels, “[l]iving and unliving things are exchanging properties” (1996, 223; emphasis in original). Between the lines of the text, the lines of humanism cross themselves (out), and the moment at which humanism insists becomes the moment at which it nonetheless desists. Quite against his will, quite against all odds, Descartes has begun to resemble Deckard, the troubled protagonist of Do Androids Dream of Electric Sheep? … and Blade Runner … , who utterly fails to police the boundary between the real and the fake (Ibid.).
Casting Harrison Ford as Descartes is nice, but too quick. While Descartes may have had reasons to observe ontological hygiene precautions, in this passage, at least, he is attempting to motivate belief in a disembodied mind through inference to the best explanation. It is the manifest functional differences between humans and brutes that institutes the quarantine line, not the dualist ontology of extended matter and non-extended mind (Ryle’s “ghost in the machine”).
Descartes makes an empirical assumption about the limits of mechanical complexity in this passage (not an a priori claim about in-principle complexity). If we think it holds, we should infer that our synapses are haunted; if we don’t, we shouldn’t. Badmington’s deconstruction requires that Descartes be saddled with the assumption that rationality is a matter of appropriate functioning rather than spooked synapses. But in that case, there is a mark of the human (or at least of the rational intellect): namely, the capacity to function as humans do.
Thus we may easily ditch substance dualism while holding that an exhaustive list of the arrangements of matter responsible for each token of appropriate behaviour could not explain the capacity for flexible and rational behaviour. For one thing, any such account would be too vast to afford descriptive economy. More importantly, it would miss the abstract functional facts that, by hypothesis, distinguish us from the brutes. Thus, as Wheeler argues in Reconstructing the Cognitive World: The Next Step, the rejection of substance dualism is compatible with explanatory dualism: a scientific methodology that turns away from neuroscience and biomechanics to consider the inferential processes or practices which produce intelligent, flexible and adaptable behaviour. According to this view: “flexible, intelligent action remains conceptually and theoretically independent of the scientific understanding of the agent’s physical embodiment” (Wheeler 2005, 51).
Badmington, Neil (2003). “Theorizing Posthumanism”. Cultural Critique 53: 10–27.
Descartes, René (1985). A Discourse on Method. Trans. John Veitch. London: Everyman.
Wheeler, Michael (2005). Reconstructing the Cognitive World: The Next Step. Cambridge, MA: MIT Press.