This is an abstract for a presentation that I will be giving in a roundtable discussion on posthumanism and aesthetics with Debra Benita Shaw and Stefan Sorgner at the University of East London on May 18 2015. Further details will be made available.
Posthumanism can be critical or speculative. These positions converge in opposing human-centred (anthropocentric) thinking. However, their rejection of anthropocentrism applies to different areas. Critical Posthumanism (CP) rejects the anthropocentrism of modern philosophy and intellectual life; Speculative Posthumanism (SP) opposes human-centric thinking about the long-run implications of modern technology.
CP is interested in the posthuman as a cultural and political condition. Speculative Posthumanists propose the metaphysical possibility of technologically created nonhuman agents. SP states: there could be posthumans – where posthumans would be “wide human descendants” of current humans that have become nonhuman in virtue of some process of technical alteration.
In Posthuman Life I elaborate a detailed version of SP. Specifically, I describe what it is to become posthuman in terms of “the disconnection thesis” [DT] (Roden 2012; 2014, Chapter 5). DT understands “becoming posthuman” in abstract terms. Roughly, it states that an agent becomes posthuman iff it becomes independent of the human socio-technical system as a consequence of technical change. It does not specify how this might occur or the nature of the relevant agents (e.g. whether they are immortal uploads, cyborgs, feral robots or Jupiter-sized brains).
Posthuman Life argues that the abstractness of DT is epistemologically apt because there are no posthumans and thus we are in no position to deduce constraints on their possible natures or values (I refer to this position as “anthropologically unbounded posthumanism” [AUP]). AUP has implications for the ethics of becoming posthuman that are generally neglected in the literature on transhumanism and human enhancement.
The most important of these is that there can be no a priori ethics of posthumanity. Becoming posthuman can only be substantively (as opposed to abstractly) understood by making posthumans or becoming posthuman. I argue that, given the principled impossibility of a prescriptive ethics here, we must formulate strategies for speculating on and exploring nearby “posthuman possibility space”.
In this paper, I propose that aesthetic theory and practice may be a useful political model for such technological self-fashioning because it involves styles of thought or creation that discover their constraints and values by producing them. This “production model” is, I will argue, the only one liable to serve us if, with CP/SP, we reject an anthropocentric privileging of the human. I finish by considering some examples of aesthetic practice that might provide models for the politics of making posthumans or becoming posthuman.
Roden, David. 2012. “The Disconnection Thesis”. In The Singularity Hypothesis: A Scientific and Philosophical Assessment, A. Eden, J. Søraker, J. Moor & E. Steinhart (eds), 281–98. London: Springer.
Roden, David. 2014. Posthuman Life: Philosophy at the Edge of the Human. London: Routledge.
Eric Schwitzgebel has a typically clear-eyed, challenging post on the implications of (real) artificial intelligence for our moral systems over at the Splintered Mind. The take-home idea is that our moral systems (consequentialist, deontological, virtue-ethical, whatever) are adapted for creatures like us. The weird artificial agents that might result from future iterations of AI technology might be so strange that human moral systems would simply not apply to them.
Scott Bakker follows this argument through in his excellent Artificial Intelligence as Socio-Cognitive Pollution, arguing that blowback from such posthuman encounters might literally vitiate those moral systems, rendering them inapplicable even to us. As he puts it:
The question of assimilating AI to human moral cognition is misplaced. We want to think the development of artificial intelligence is a development that raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines.
As any reader of Posthuman Life might expect, I think Eric and Scott are asking all the right questions here.
Some (not me) might object that our conception of a rational agent is maximally substrate neutral. It’s the idea of a creature we can only understand by treating it as responsive to reasons. According to some (Davidson/Brandom) this requires the agent to be social and linguistic – placing such serious constraints on “posthuman possibility space” as to render this discourse moot.
Even if we demur on this, it could be argued that the idea of a rational subject as such gives us a moral handle on any agent – no matter how grotesque or squishy. This seems true of the genus “utility monster”. We can acknowledge that UMs have goods and that consequentialism allows us to cavil about the merits of sacrificing our welfare for them. Likewise, agents with nebulous boundaries will still be agents and, so the story goes, rational subjects whose ideas of the good can be addressed by any other rational subject.
So according to this Kantian/interpretationist line, there is a universal moral framework that can grok any conceivable agent, even if we have to settle details about specific values via radical interpretation or telepathy. And this just flows from the idea of a rational being.
I think the Kantian/interpretationist response is wrong-headed. But showing why is pretty hard. A line of attack I pursue concedes to Brandom-Davidson that we have the craft to understand the agents we know about. But we have no non-normative understanding of the conditions something must satisfy to be an interpreting intentional system or an apt subject of interpretation (beyond commonplaces like heads not being full of sawdust).
So all we are left with is a suite of interpretative tricks whose limits of applicability are unknown. Far from being a transcendental condition on agency as such, it’s just a hack that might work for posthumans or aliens, or might not.
And if this is right, then there is no future-proof moral framework for dealing with feral robots, Cthulhoid monsters or the like. Following First Contact, we would be forced to revise our frameworks in ways that we cannot possibly have a handle on now. Posthuman ethics must proceed by way of experiment.
Or they might eat our brainz first.
There’s a lively debate around Scott Bakker’s recent lecture: “The End of the World As We Know It: Neuroscience and the Semantic Apocalypse” given at The University of Western Ontario’s Centre for the Study of Theory and Criticism here at Speculative Heresy. The text includes responses from Nick Srnicek and Ali McMillan.
This piece from Una Sinnott is delicious. It demonstrates how work in the arts (here, experimental music) can feed into fundamental technologies, which can then hop between disparate applications (radio-controlled torpedoes, GPS). It’s a case study in how women’s contributions to technology get marginalised, and how patriarchy blows back.
“The Sobornost Station is large enough to have its own weather. The ghost-rain inside does not so much fall but shimmers in the air. It makes shapes and moves, and gives Tawaddud the constant feeling that something is lurking just at the edge of her vision.
She looks up, and immediately regrets it. Through the wet veil, it is like looking down from the top of the Gomelez shard. The vertical lines far above pull her gaze towards an amber-hued, faintly glowing dome almost a kilometer high, made of transparent, undulating surfaces that bunch together towards the centre, like the ceiling of a circus tent, segmented by the sharply curving ribs of the Station’s supporting frame.
Forms like misshapen balloons float beneath the vault. At first they look random, but as Tawaddud watches, they coalesce into shapes: the line of a cheekbone and a chin and an eyebrow. Then they are faces, sculpted from air and light, looking down on her with hollow eyes.”
(Rajaniemi 2012, 82)
Rajaniemi, Hannu (2012). The Fractal Prince. St Ives: Gollancz.
Stopped over in Athens Airport trying to digest three days at the Posthuman Politics conference at Mytilini, Lesbos, 25-28 September. It was an intense experience on so many levels and utterly worthwhile. My work has veered into some relentlessly abstract places recently, because someone has to … But having the privilege of attending Jaime del Val’s metahuman performance and Stefan Lorenz Sorgner’s star turn on metahumanist pedagogy was formative.
I’m not done with posthumanist metaphysics, or Scott’s semantic Götterdämmerung, but Stefan and Jaime are forging a value-pluralist posthuman politics with a real chance of productively mapping human-posthuman modes of embodiment and experience within an interdisciplinary framework. For what it’s worth, I think their open-textured practice may constitute our most tenable (if still precarious) path through the posthuman predicament. It has direct implications for public policy (e.g. Stefan’s argument for genetic engineering in education) – perhaps even for getting out of the neoliberal quagmire. None of this, of course, begins to convey the energy and intellectual openness of the event or the delightful hospitality of Evi Sampanikou and the humans and nonhumans of the University of the Aegean.
Continuing the “dark” posthumanism strand from recent blog posts and from my book Posthuman Life: Philosophy at the Edge of the Human (Routledge 2014), I argue that we cannot extend our moral thinking to certain portions of “posthuman possibility space” because our folk psychology and parochial norms of practical reasoning might not apply to “hyperplastic” posthumans. I conclude that there are no good grounds to reject the possibility that there are non-persons every bit as morally considerable as persons. Paper on academia.edu here.
According to the Disconnection Thesis (Roden 2012; 2014: Chapter 5) a posthuman is an agent descended from some part of the human socio-technical system that has “gone feral”. In its ancestral form, it may have served human ends, or have been narrowly human itself, but (post-disconnection) has accrued values and roles elsewhere.
To date there are no posthumans, so we can only guess at their likely powers. But it seems safe to assume that anything capable of breaking away from the human system would need to be at least as flexible and adaptable as humans themselves.
These powerful entities might be indifferent to humans, but they may not like us at all; or like us in ways we would not like to be liked. They may view us as a threat, or they may be immensely powerful sadists who devote some part of their technological prowess to killing and torturing us. If posthumans are conceivable, so are very bad posthumans.
So can we do some contingency planning to insure against the emergence of posthuman dark lords? To do this we would need some handle on the kinds of current technologies that might induce a dark lord disconnection (DLD). But what kinds of technologies could these be?
It might seem that some technological possibilities can be discerned a priori – by consulting reliable conceptual “intuitions” about the extendible powers of current technologies. For example, a being like Skynet – the genocidal military computer in James Cameron’s Terminator films – seems a plausible occupant of a posthuman timeline; whereas Sauron, the supernatural dark lord of Tolkien’s Lord of the Rings, does not. However, since the work of Saul Kripke in the 1970s many philosophers have come to accept that there are a posteriori natural possibilities and necessities that are only discoverable empirically. That light has the same velocity in every reference frame upsets common-sense intuitions about relative motion and could not have been discovered by reflecting on pre-relativistic concepts of light.
Claims about hypothetical technological possibility may be as vulnerable to refutation as naive physics. States like the US and China employ computers to co-ordinate military activities, so a Skynet seems the more plausible posthuman antagonist. But the fact that there are computers but no supernatural dark lords does not entail that their capacities could be extended in any way we imagine. Light bulbs exist as well as computers, but maybe a Skynet is no more technologically possible than Byron the intelligent light bulb in Thomas Pynchon’s fabulist novel Gravity’s Rainbow.
So here’s a thing. Posthuman Possibility Space (the set of technically possible routes to disconnection) may contain a Dark Lord Possibility Sub-Space – a set of trajectories all of which lead to a DLD. We may not have any reliable indication of what (if anything) belongs to it. But, quite possibly, it is out there, waiting.
Roden, David. 2012. “The Disconnection Thesis”. In The Singularity Hypothesis: A Scientific and Philosophical Assessment, A. Eden, J. Søraker, J. Moor & E. Steinhart (eds), 281–98. London: Springer.