Posthuman Life: The Galapagos Objection

On August 4, 2015, in Uncategorized, by enemyin1

Since Philperc’s Posthuman Life reading group got into gear a month ago, I’ve been dealing with numerous objections to the theses in Posthuman Life. But I’ve not been beset in quite the way I had expected. In my simplicity, I had assumed that the epistemological claims for unbounded posthumanism developed in Chapters 3 and 4 (and in later work on Brandom and hyperplasticity) would be attracting flak from analytical pragmatists and phenomenologists who want to retain a priori constraints on (post)human possibility. Somewhat to my surprise, fire has been concentrated on the positive thesis of Speculative Posthumanism (SP), and the disconnection thesis (DT) in particular.

Retrospectively, it shouldn’t be all that shocking. The DT is a big, lumbering target. As Rick Searle observes in his review on the IEET site, it is an attempt to impose conceptual uniformity on unknown but conceivably highly diverse conditions while taking full account of our dated ignorance of posthuman natures. The fact that it attempts to lay out clear satisfaction conditions for posthumanity is like a big “Hit Me!” sign inviting counter-examples, problem cases and deconstructions. Something had to give, it seems.

To date the objections have come from two sides. A critical posthumanist objection (articulated in different forms by Searle and Debbie Goldgaber) exploits an analytic distinction between disruptive technical change internal to the Wide Human network and the agential independence required by DT. This is already implicit in the work on anthropologically unbounded posthumanism, where I argue that our knowledge of posthuman possibility is tenuous.

Well, the argument goes, so is our grasp of wide human possibility. Searle argues that the Wide Human network could diverge from current humanity without disconnecting from it. There could be stuff happening that is a) intrinsically alien or weird and b) does not lead to independence from WH but to a radical transformation or extension of it:

[What] real posthuman weirdness would seem to require would be something clearly identified by Roden and not dependent, to my lights, on his disruption thesis being true. The same reality that would make whatever follows humanity truly weird would be that which allowed alien intelligence to be truly weird; namely, that the kinds of cognition, logic, mathematics, science found in our current civilization, or the kinds of biology and social organization we ourselves possess to all be contingent. What that would mean in essence was that there were a multitude of ways intelligence and technological civilizations might manifest themselves of which we were only a single type, and by no means the most interesting one. Life itself might be like that with the earthly variety and its conditions just one example of what is possible, or it might not.

According to this story, posthumanity (in the sense of a weird succession to current humanity) does not presuppose disconnection. Disconnection is not necessary for posthumanity.

A more radical riposte comes from Scott Bakker. He argues that the notion of agency that I develop in the Chapter 6 clarification of the Disconnection Conditions is a folk notion that fails to capture the radically non-agential possibilities opened up by a technological singularity. For Bakker, the singularity is the posthuman.

I think he’s right to have issues with my notion of agency. It’s a kluge designed to meet my systematic aims and requires a more detailed metaphysical exposition. For all that, I don’t think Scott has made a persuasive case for expunging agents from our ontology, yet.

In contrast to the critical posthumanists, Jon Cogburn has argued that disconnection may not be sufficient for posthumanity. There are conceivable divergences from the human implied by our current understanding of biology that are trivial and thus do not merit the concern the DT is intended to articulate. He cites the non-sapient fishlike successors of current humans depicted in Vonnegut’s novel Galapagos as examples of trivial posthuman succession. The Disconnection Thesis states that a being is posthuman iff:

  • It has ceased to belong to WH (the Wide Human) as a result of technical alteration.
  • Or it is a wide descendant of such a being (outside WH) (PHL 112)
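Put schematically (a rough first-order gloss; the predicate letters are mine, not the book’s):

```latex
% Rough first-order gloss of the Disconnection Thesis.
% P(x): x is posthuman
% C(x): x has ceased to belong to WH through technical alteration
% D(x,y): x is a wide descendant of y
% O(x): x is outside WH
\forall x\, \Big[ P(x) \leftrightarrow C(x) \vee \exists y\, \big( C(y) \wedge D(x,y) \wedge O(x) \big) \Big]
```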

The fish successors in Galapagos qualify as posthuman trivially according to Cogburn.

Their ancestors underwent mutation due to fallout from a nuclear war. Either they have ceased to belong to WH in virtue of a technical alteration in their environment, or they qualify as descendants of such beings. Yet they do not constitute an ontological novelty. They are no weirder than any other nonhuman life form, and they do not exhibit a particularly high degree of functional autonomy.

Cogburn’s objection is elegant and immensely entertaining – do read it! For this reason alone (and because I’ve responded extensively to Bakker and Goldgaber over at philpercs) I want to focus on it in this post.

As he makes clear, the problem posed by the Galapagos example concerns an apparent ambiguity in the scope of the first condition of the DT. If a “technical alteration” is construed to include any change in the world arising indirectly from human technical activity (nuclear war, in this case), then any evolutionary process it catalyzed that resulted in nonhumans with human ancestors would be a posthuman maker. But I want to argue that posthumans would have to have significant functional autonomy (or power) to escape the influence of WH, whereas no such power is implied in Galapagos-type cases. The “posthumous” fishes do not have to break out of a fish farm, for example. WH simply withers away as narrow humans develop in ways that do not suffice to maintain it.

Now, there are various responses to the Galapagos objection. Some of these involve amendments to the schematic statement of the DT. This has happened before.

Three years ago, Søren Holm pointed out that a similarly trivial result could be achieved if posthumans decided to produce biological humans as wide descendants who were subsequently reabsorbed into WH. Hence the current stipulation that wide descendants of posthumans remain outside WH.

I think Pete Mandik suggests the way this should go in a Twitter response where he writes, “the solution involves distinguishing between being a technical alteration and being an effect of a T.A.” Radiation from a nuclear war is an environmental change: not a technical change but an effect of one. The same goes for the increased mutation rate that produced the post-sapient fish people.

It may seem that I’m leaning on a leaky distinction between direct and indirect technical causes here. To say that the increased mutation rate is not a technical change is just to say that it is indirectly rather than directly caused by technical change. However, it could be objected that there is no principled (non-observer-relative) way of distinguishing between direct and indirect causes in any instance. All causation is mediated by intervening causes if we but look (experts on the metaphysics of causation might beg to differ, of course).

But we can avoid having to make the distinction between direct and indirect technological causes by stipulating that the process of ceasing to be human result from the exercise of technological powers by the disconnecting beings themselves.

This is not true of the post-people of Galapagos. They do not exercise the technological powers that result in the withering away of WH. They are effects of its exercise by others.

This clarification comports well with the DT and the assemblage theory in which it is framed (more specifically, with the philosophy of technology laid out in Chapter 7), though it is not an explicit consequence of the schematic formulation. There might be a way of reformulating the DT to allow for this (along the lines of my response to Holm), but for reasons of time and incompetence I’ll hold off on that here.

Posthumans, like humans, have components which instantiate technologies. It doesn’t have to follow that they are technologies, of course. I’m inclined to the view that technologies are abstract particulars concretized in disparate forms and contexts. Vonnegut’s post-people don’t instantiate or exercise such technologies, so they don’t qualify as posthumans.

There are other responses. One could just allow that Vonnegut’s post-people are posthuman but boring – not the kind that elicits our moral concern. However, I think the clarification suggested by Mandik provides a more robust response, since it makes clear why the DT articulates our moral concern with posthuman possibility. The posthuman – according to this account – is inherently disruptive because of an independence from human ends resulting from the emergence of new technical powers. This independence implies significant functional autonomy because the technical powers exhibited by posthumans are no longer exercised by us.

Beings exhibiting this independence need not be maximally weird, but then I allow for disconnections involving posthumans that are not radically alien in any way (e.g. genetically engineered super-cooperators, Cylons or some such). In any case, the evocation of the weird is designed to suggest the epistemic scope for divergence (given anthropological unboundedness). Nothing is weird as such, or intrinsically, unless we allow for the kind of radical transcendence contemplated in negative theology.





Here’s the audio for a fizzy discussion on posthumanism in the arts I participated in at the Centre for Cultural Studies Research at the University of East London. We talked monsters, posthuman urbanism, science fiction, the speculative/critical divide in posthumanism, whether immersive media and technological arts might help us overcome entrenched dualisms in western thought, and the political implications (if any) of deconstructing such binaries.

With Debra Benita Shaw (University of East London, Centre for Cultural Studies Research), Stefan Sorgner (University of Erfurt), David Roden (Open University), Dale Hergistad (X-Media Lab) and Luciano Zubillaga (UWL Ealing School of Art, Design and Media).



No Future? Catherine Malabou on the Humanities

On February 19, 2014, in Uncategorized, by enemyin1

Catherine Malabou has an intriguing piece on the vexed question of the relationship between the “humanities” and science in the journal Transeuropeennes here.

It is dominated by a clear and subtle reading of Kant, Foucault and Derrida’s discussion of the meaning of Enlightenment and modernity. Malabou argues that the latter thinkers attempt to escape Kantian assumptions about human invariance by identifying the humanities with “plasticity itself”. The Humanities need not style themselves in terms of some invariant essence of humanity. They can be understood as a site of transformation and “deconstruction” as such.  Thus for Derrida in “University Without Condition”, the task of the humanities is:

the deconstruction of « what is proper to man » or to humanism. The transgression of the transcendental implies that the very notion of limit or frontier will proceed from a contingent, that is, historical, mutable, and changing deconstruction of the frontier of the « proper ».

Whereas, for Foucault, the deconstruction of the human involves exhibiting its historical conditions of possibility and experimenting with these by, for example, thinking about “our ways of being, thinking, the relation to authority, relations between the sexes, the way in which we perceive insanity or illness”.

This analysis might suggest that the Humanities have little to fear from technological and scientific transformations of human bodies or minds; they are just the setting in which the implications of these alterations are hammered out.

This line of thought reminds me of a revealingly bad argument produced by Andy Clark in his Natural Born Cyborgs:

The promised, or perhaps threatened, transition to a world of wired humans and semi-intelligent gadgets is just one more move in an ancient game . . . We are already masters at incorporating nonbiological stuff and structure deep into our physical and cognitive routines. To appreciate this is to cease to believe in any post-human future and to resist the temptation to define ourselves in brutal opposition to the very worlds in which so many of us now live, love and work (Clark 2003, 142).

This is obviously broken-backed: that earlier bootstrapping didn’t produce posthumans doesn’t entail that future ones won’t. Even if humans are essentially self-modifying, it doesn’t follow that any prospective self-modifying entity is human.

The same problem afflicts Foucault and Derrida’s attempts to hollow out a reservation for humanities scholars by identifying them with the promulgation of transgression or deconstruction. Identifying the humanities with plasticity as such throws the portals of possibility so wide that it can only refer to an abstract possibility space whose contents and topology remain closed to us. If, with Malabou, we allow that some of these transgressions will operate on the material substrate of life, then we cannot assume that its future configurations will resemble human communities or human thinkers – thinkers concerned with topics like sex, work and death, for example.

Malabou concludes with the suggestion that Foucault and Derrida fail to confront a quite different problem. They do not provide a historical explanation of the possibility of transformations of life and mind to which they refer:

They both speak of historical transformations of criticism without specifying them. I think that the event that made the plastic change of plasticity possible was for a major part the discovery of a still unheard of plasticity in the middle of the XXth century, and that has become visible and obvious only recently, i.e. the plasticity of the brain that worked in a way behind continental philosophy’s back. The transformation of the transcendental into a plastic material did not come from within the Humanities. It came precisely from the outside of the Humanities, with again, the notion of neural plasticity. I am not saying that the plasticity of the human as to be reduced to a series of neural patterns, nor that the future of the humanities consists in their becoming scientific, even if neuroscience tends to overpower the fields of human sciences (let’s think of neurolinguistics, neuropsychoanalysis, neuroaesthetics, or of neurophilosophy), I only say that the Humanities had not for the moment taken into account the fact that the brain is the only organ that grows, develops and maintains itself in changing itself, in transforming constantly its own structure and shape. We may evoke on that point a book by Norman Doidge, The Brain that changes itself. Doidge shows that this changing, self-fashioning organ is compelling us to elaborate new paradigms of transformation.

I’m happy to concede that the brain is a special case of biological plasticity, but, as Eileen Joy notes elsewhere, the suggestion that the humanities have been out of touch with scientific work on the brain is unmotivated. The engagement between the humanities (or philosophy, at least) and neuroscience already includes work as diverse as Paul and Patricia Churchland’s work on neurophilosophy and Derrida’s early writings on Freud’s Scientific Project.

I’m also puzzled by the suggestion that we need to preserve a place for transcendental thinking at all here. Our posthuman predicament consists in the realization that we are alterable configurations of matter and that our powers of self-alteration are changing in ways that put the future of human thought and communal life in doubt. This is not a transcendental claim. It’s a truistic generalisation which tells us little about the cosmic fate of an ill-assorted grab bag of  academic disciplines.


Clark, A. 2003. Natural-born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. New York: Oxford University Press.







People and cultures have some non-overlapping beliefs. Some folk believe that there is a God, some that there is no God, some that there are many gods. Some people believe that personal autonomy is a paramount value, while others feel that virtues like honour and courage take precedence over personal freedom. These core beliefs are serious, in that they make a difference to whether people live or die, or are able to live the kinds of life that they wish. People fight and die for the sake of autonomy. People fight, die or institute gang rapes in the interests of personal honour.

Some folk – the self-styled pluralists – believe that respect for otherness is a paramount political value. Respecting otherness, they say, is so paramount that it should regulate our ontological commitments – our assumptions about what exists. I must admit that I find this hard to credit ontologically or ethically. But it is also unclear how we should spell the principle out. So I’ll consider two versions that have circulated in the blogosphere recently. The first, I will argue, teeters on incoherence or, where not incoherent, is hard to justify in ethical or political terms. The second – which demands that we build a common world – may also be incoherent, but I will argue that we have no reason to think that its ultimate goal is realisable.

According to Philip at Circling Squares, Isabel Stengers and Bruno Latour think that this position should enjoin us to avoid ridiculing or undermining others’ values or ontologies. Further, that we should:

grant that all entities exist and, second, that to say that someone’s cherished idol (or whatever disputed entity they hold dear) is non-existent is a ‘declaration of war’ – ‘this means war,’ as Stengers often says.

I’ll admit that I find the first part of this principle damn puzzling. Even if we assume – for now – that it is wrong to attempt to undermine another person’s central beliefs, this principle seems to require either a) that people actually embrace ontological commitments that are contrary to the ones they adhere to; b) that they pretend not to have their core beliefs; or c) that they adopt some position of public neutrality vis-à-vis all core beliefs.

The first interpretation (a) results in the principle that one should embrace the contrary of every core belief; or, in effect, that no one should believe anything. So (in the interests of charity) we should pass on.

b) allows us to have beliefs so long as they are unexpressed. Depending on your view of beliefs, this is either incoherent (because there are no inexpressible beliefs) or burdens believers in a way that no one is likely to find acceptable.

So I take Philip to embrace c). His clarification suggests something along these lines: he claims that it is consistent with respecting otherness to say what we believe about others’ idols but not to publicly undermine their reasons for believing in them. Thus:

Their basic claim seems to be that ‘respect for otherness,’ i.e. political pluralism, can only come from granting the entities that others hold dear an ontology, even if you don’t ‘believe’ in them.  You are thus permitted to say ‘I do not follow that god, he has no hold over me’ but you are not permitted to say ‘your god is an inane, infantile, non-existent fantasy, grow up.’  And it’s not just a question of politeness (although there’s that too).  The point is to grant others’ idols and deities an existence – one needn’t agree over what that existence entails, over what capacities that entity has or what obligations it impresses upon you as someone in its partial presence but to deny it existence entirely is to ‘declare war’ – to deny the possibility of civil discourse, of pluralistic co-existence.

I must admit that I find this principle of respect puzzling as well. After all, some of my reasons for being an atheist are also reasons against being a theist. So unless this is just an innocuous plea for good manners (which I’m happy to sign up to on condition that notional others show me and mine the same forbearance) it seems to require that all believers keep their reasons for their belief to themselves. This, again, seems to demand an impossible or repugnant quietism.

So, thus far, ontological pluralism seems to be either incoherent or to impose such burdens on all believers that nobody should be required to observe it. There is, of course, a philosophical precedent for restricted ontological quietism in Rawls’ political liberalism. Rawls proposes that reasonable public deliberation recognize the “burdens of judgement” by omitting any justification that hinges on “comprehensive” ethical or religious doctrines over which there can be reasonable disagreement (Rawls 2005, 54). Deliberations about justice under Political Liberalism are thus constrained to be neutral towards “conflicting worldviews” so long as they are tolerant and reasonable (Habermas 1995, 119, 124-5).

However, there is an important difference between the political motivations behind Rawlsian public reason and the position of “ontological charity” Philip attributes to Stengers and Latour. Rawls is motivated by the need to preserve stability within plural democratic societies. Public reason does not apply outside the domain of political discourse in which reasonable citizens hash out basic principles of justice and constitutional essentials. It is also extremely problematic in itself. Habermas argues that Rawls’s exclusion of plural ethical or religious beliefs from the public court is self-vitiating, because comprehensive perspectives are sources of disagreement about shared principles (for example, the legitimacy of abortion or same-sex marriage) and these must accordingly be addressed through dialogue rather than circumvented if a politically stable consensus is to be achieved (126).

Finally, apart from being incoherent, the principle of ontological charity seems unnecessary. As Levi Bryant points out in his realist retort to the pluralist, people are not the sum of their beliefs. Beliefs can be revised without effacing the believer. Thus an attack on core beliefs is not an attack on the person holding those beliefs.

So it is hard to interpret the claim that we should grant the existence of others’ “idols” as much more than the principle that it is wrong to humiliate, ridicule or insult people because of what their beliefs are. This seems like a good rule of thumb, but it is hard to justify the claim that it is an overriding principle. For example, even if Rushdie’s Satanic Verses “insults Islam”, having an open society in which aesthetic experimentation and the critical evaluation of ideas are possible is just more important than saving certain sections of it from cognitive dissonance or intellectual discomfort. Too many people have suffered death, terror and agony because others had aberrant and false core beliefs to make it plausible that these should be immune from criticism or ridicule. A little personal dissonance is a small price to pay for not going to the oven.

So what of the principle that we should build a “common world”? This is set out by Jeremy Trombley on his Struggle Forever blog under the rubric of “cosmopolitics”. Jeremy regards this project as an infinite task that requires us to seek a kind of fusion between different worldviews, phenomenologies and ontologies:

The project, as Latour, Stengers, James, and others have described it, is to compose a common world. What pluralism recognizes is that, in this project, we all start from different places – Latour’s relativity rather than relativism. The goal, then, (and it has to be recognized that this project is always contingent and prone to failure) is to make these different positions converge, but in a way that doesn’t impose one upon the other as the Modern Nature/Culture dichotomy tends to do. Why should we avoid imposing one on the other? In part because it’s the right thing to do – by imposing we remove or reduce the agency of the other. The claim to unmediated access to reality makes us invulnerable – no other claim has that grounding, and therefore we can never be wrong. But we are wrong – the science of the Enlightenment gave us climate change, environmental destruction, imperialism in the name of rationality (indigenous peoples removed from their land and taken to reeducation facilities where they were taught “rational” economic activities such as farming), and so on. It removed us from the world and placed us above it – the God’s eye view.

I think there are a number of things wrong with cosmopolitics as Jeremy describes it here.

Firstly, seeking to alter beliefs or values does not necessarily reduce agency because people are not their beliefs.

Secondly, some worldviews – like the racist belief-systems that supported the European slave trade – just need to be imposed upon because they are bound up with violent and corrupting socio-political systems.

Thirdly, I know of no Enlightenment thinker, or realist, for whom “unmediated access to reality” is a sine qua non for knowledge. Let’s assume that “realism” is the contrary of pluralism here. It’s not clear what unmediated access would be like, but all realists are committed to the view that we don’t have it, since if reality has a mind-independent existence and nature, it can presumably vary independently of our beliefs about it. In its place, we have various doctrines of evidence and argument that are themselves susceptible to revision. Some analyses of realism suppose that realists are committed to the claim that there is one true account of the world (the God’s Eye View) but – as pointed out in an earlier post – this commitment is debatable. In any case, supposing the existence of a uniquely true theory is very different from claiming to have it.

Finally, much hinges on what we mean by a common world here. I take it that it is not the largely mind-independent reality assumed by the realist since – being largely mind-independent – it exists quite independently of any political project. So I take it that Jeremy is adverting to something like a shared phenomenology or experience: a kind of fusion of horizons at the end of time. If we inflect “world” in this sense, then there is no reason for believing that such an aim is coherent, let alone realisable. This possibility depends on there being structures of worldhood that are common to all beings that can be said to have one (Daseins, say). I’ve argued that there are no reasons for holding that we have access to such a priori knowledge because – like Scott Bakker – I hold that phenomenology gives us very limited insight into its nature. Thus we have no a priori grasp of what a world is and no reason to believe that Daseins (human or nonhuman) could ever participate in the same one. The argument for this is lengthy, so I refer the reader to my paper “Nature’s Dark Domain” and my forthcoming book Posthuman Life.


Habermas, Jurgen. 1995. “Reconciliation through the Public Use of Reason: Remarks on John Rawls’s Political Liberalism.” The Journal of Philosophy 92 (3): 109–131.

Rawls, John. 2005. Political Liberalism. Columbia University Press.





Rebecca Saxe and Clockwork Orange 2.0

On September 25, 2013, in Uncategorized, by enemyin1


In this excellent presentation, Saxe claims that Transcranial Magnetic Stimulation applied to the temporo-parietal junction (TPJ) – a region specialized for mentalizing in human adults – can improve the effectiveness of moral reasoning by improving our capacity to understand other human minds.

This suggests an interesting conundrum for moral philosophers working in the Kantian tradition, where recognizing the rationality and personhood of offenders is held to be a sine qua non for justifications of punishment. We can imagine a Philip K. Dick style world in which miscreants are equipped with surgically implanted TMS devices which zap them whenever an automated surveillance system judges them to be in a morally tricky situation calling for rapid and reliable judgements about others’ mental states. Assuming that such devices would be effective, would this still constitute a violation of the offender’s personhood – treating the offender as a refractory animal who must be conditioned to behave in conformity with societal norms, like Alex in A Clockwork Orange? Or would the enhancement give that status its due by helping the offender become a better deliberator?


Assuming the TMS devices could achieve their aim of improving moral cognition, it seems odd to say that this would be a case of “tiger training” which bypasses the offender’s capacity for moral reasoning since it would presumably increase that very capacity. It is even conceivable that an effective moral enhancement could be co-opted by savvy Lex Luthor types to enhance the criminal capacities of their roughnecks, making them more effective at manipulating others and sizing up complex situations. At the same time, it would be quite different from punishment practices that appeal to the rational capacities of the offender. Having one’s TPJ zapped is not the same as being asked to understand the POV of your victim – though it might enhance your ability to do so.

So an effective moral enhancement that increases the capacity for moral reasoning in the cognitively challenged would be neither a violation of nor an appeal to their reason. It would not be like education or a talking therapy, but neither would it be like the cruder forms of chemical or psychological manipulation. It could enhance the moral capacities of people, but it would do so by tying them into technical networks that, as we know, can be co-opted for ends that their creators never anticipated. It might enhance the capacity for moral agency while also increasing its dependence on the vagaries of wider technical systems. Some would no doubt see such a development as posthuman biopower at its most insidious. They would be right, I think, but technology is insidious precisely because our florid agency depends on a passivity before cultural and technical networks that extend it without expressing a self-present and original human subjectivity.


In this highly illuminating talk from EXPO1 at MOMA, Ray proposes that there is nothing inherently wrong with the transhuman reengineering of nature on the “promethean” grounds that nature has no ethical dispensation. Thus there is no natural, ontological or theological order violated by the extension of human cognitive powers or by the creation of synthetic life. Such processes are potentially violent and destructive, but that is acceptable as long as we distinguish between “good” emancipatory violence and that which oppresses and restricts the life chances of rational subjects.

I’m wholly in agreement with Ray in his rejection of theological objections to the technological refashioning of human and non-human nature. I’m less convinced that the idea of emancipation is an adequate horizon within which to adjudicate between the new world-engines that might lie before us. But I agree that we need some ethically substantive framework in which to do this. My own leaning is increasingly towards a pluralist moral realism – the claim that there are objectively good or bad locations in Posthuman Possibility Space but no moral hierarchy in which these are enfolded in turn. So to adjudicate these we need to “sample” them by experimenting with bodies, things and minds.

Ray also peppers his talk with references to J. G. Ballard’s short story “The Voices of Time”, one of his many narratives of ontological catastrophe. Ballard’s own position on emancipation is profoundly ambivalent, as Baudrillard observes. Something to return to in a later post or article, I think.


The Extended Mind Works!

On May 10, 2013, in Uncategorized, by enemyin1

Deep into the morning procrastination ritual – reading two or more blogs and FB instead of the chapter I’m meant to be finishing – I realized that I had forgotten what I had been reading a minute ago. So I let my mouse hover over the IE icon on my task bar and hey presto! I saw a “mouse over” preview of the Discover Post on identical twins I had been perusing. Moral: the extended mind works, but it needs metacognition to patch its resources together.

“Distracted from distraction by distraction” – T. S. Eliot, Burnt Norton