Synthetic Biology Tutorial

On March 29, 2012, in Uncategorized, by enemyin1

For anyone thinking of trying it at home, see here.

For DIY Synthbio page see here.



So What’s Wrong with Humanism Anyway?

On March 6, 2012, in Uncategorized, by enemyin1


I feel like a humanist fellow traveller because I am quite obviously some sort of humanist. Many who call themselves “humanists” support ethical positions I sign up to. A slack religious schooling has left me with a distaste for sublimated theocrats who attack secularism on grounds of cultural autonomy. The slick misanthropy of Lars von Trier’s Melancholia is a turn-off. I think those in the ecology movement who believe biomass is worth dying for should act on their principles.

The default humanism that I’ve owned is (something like) a claim for the singularity and intrinsic value of human life. As John Appleby observes in his 2010 piece for the New Humanist, “Man & Other Beasts”, this is a defensible anthropocentrism. If humans have distinctive moral capacities like biographical self-awareness and autonomy, they have a claim to special treatment:

One of the grounding principles of humanism, agreed upon by both its supporters and detractors, is some form of anthropocentrism. However, anthropocentrism is not the same thing as speciesism and while the latter may well be ethically dubious, it is not at all clear that the former is necessarily so. For example, if her apartment block was on fire, who would Donna Haraway rescue first: her dog or her neighbour’s baby? From a humanist perspective, the answer seems obvious. Yet posthumanism and some branches of animal studies would seem to advocate the dog and you don’t have to be a Kantian to find that grotesque.

Speciesism is wrong for the reasons that racism and heterosexism are wrong: each refuses the protection and recognition due to those possessing capacities like autonomy and self-awareness on morally irrelevant grounds like skin colour or sexual preference. Discriminating in favour of the baby is not arbitrary. A human baby has a claim that the dog (for all its moral or cognitive virtues) lacks, because the baby has a distinctive capacity to acquire distinctive moral capacities.

So is my so-called posthumanism pusillanimous or just horribly conflicted?

It is preeminently “speculative” rather than “critical” (or so I’ve claimed); focused on the contingency and limits of the human rather than its philosophical integument. Still, there are certain variants of anthropocentrism to which it is opposed.

To sort these out it’s necessary to do some distinction mongering. We can refer to the claim that humans are morally distinctive entities as “Simple Humanism” (SH). SH distinguishes in kind between humans and nonhumans by ascribing them separate capacities or values. The worst accusation that can be leveled at SH is that its attributions are false. If it turns out that protists feel pain, joy and humiliation or exercise autonomy, current programs for the eradication of malaria will need to be re-evaluated.

The second-worst accusation one can level at SH is incompleteness. Perhaps there are capacities of equal or greater weight than autonomy or self-awareness that humans downgrade or lack but which actual or possible nonhumans such as Meillassoux’s inexistent God or Charlie Stross’ Jupiter-sized Brains might possess. In the Nicomachean Ethics Aristotle argues that contemplation of eternal truths is a higher good than the exercise of practical reason and, while humanly attainable, is characteristically divine rather than human. So Aristotle counts as a Simple Humanist, but one who hedges his anthropocentrism in important and defensible ways.

The humanisms that provide the critical target of modern posthumanisms and their precursory anti-humanisms tend to be inheritors of Immanuel Kant’s “Copernican Revolution”. Kant explained the possibility of a priori knowledge by proposing that human minds construct or constitute the world rather than passively represent it. Kant does not just distinguish humans and nonhumans but makes the distinction central to his theory of objectivity and value. The most entrenched versions of this modern anthropocentrism are Instrumentalism and Transcendental Humanism (TH) which assert that the agency or being of nonhumans asymmetrically depend (a-depend) on that of humans:

  • Instrumentalism claims that only humans or persons are authentic actors while the agency of nonhumans like animals or tools is constructed by or derived from the agency of humans.
  • TH claims that the objectivity or being of nonhumans a-depends on humans.

Both positions are clearly defensible. In Philosophy of Technology instrumentalism (tool function a-depends on tool use) is the one that all critical theories of technology have to flog; and those (like Brandom and Davidson) who think that intentionality is “an essentially linguistic affair” ought to be instrumentalists about the putative intentionality of nonhumans.

Likewise, TH is extraordinarily well placed to legitimate a critical function for Philosophy because it can found its methodology on mooted invariants of human rationality or experience.

However, both place humans (or persons) in the position of “transcendental organizers”: implying a fundamental dualism, which defies natural or historical explanation – an accusation leveled at proponents of human exceptionalism by thinkers as otherwise disparate as Latour, Dennett and DeLanda.

They are ethically problematic to the extent that they discount modes of being, affect or agency that are not paradigmatically human or wholly inhuman. Simple Humanism, on the other hand, merely asserts human distinctiveness and intrinsic worth rather than any constitutive role for human rationality or phenomenology in delineating the differences and values of others. It is entirely compatible with recognizing the distinctiveness and intrinsic worth of nonhumans. Thus it is possible for a philosopher to be a rigorous posthumanist, a simple humanist, and not a wuss.



Superheroes With Dirty Hands

On November 24, 2011, in Uncategorized, by enemyin1


Alan Moore’s graphic novel Watchmen (filmed in 2009 by Zack Snyder) is an anti-superhero tale about super anti-heroes. Some of these ‘costumed adventurers’ are obsessives driven by the thrill of dressing up and breaking heads; others are co-opted by political interests or have shadowy agendas of their own. The Watchman known as ‘The Comedian’ is an amoral killer on a fat CIA remittance. The only one with actual superpowers, the glowing blue god Dr Manhattan, casually maintains US nuclear hegemony, but sees humanity as a lower order of being than the inert desert of Mars.

Watchmen honours superhero tradition by sheathing these vigilantes in improbable tights and by culminating in a desperate battle to prevent a maniac killing lots of Americans. Here, though, the balletic combat is futile. As snippets of broadcast TV testify, the Americans are long dead before the first blow lands, and the architect of the plan, Ozymandias – AKA the ‘World’s Smartest Man’ – is just a Watchman with a self-prescribed remit to usher in an era of global peace.

Ozymandias informs his fellow Watchmen that he has saved the world from nuclear annihilation by gulling the US and USSR into uniting against an illusory alien menace (the story is set in an alternative 1980s during Nixon’s third term). To simulate this threat convincingly, though, he has had to kill half the population of New York with a vile artificial life form.

Ozymandias seems like an obvious candidate for villain (this is a comic book, after all). Yet whether this is so turns on the solution to the classic philosophical problem of ‘dirty hands’.

Ozymandias provides a consequentialist argument for his actions. Pure consequentialists believe that actions must be judged according to the value of their outcomes. Thus if the murder of a million New Yorkers is preferable to the death of billions in a nuclear war, it is better to murder a million New Yorkers.

Once they learn that nothing can prevent the deaths, all but one of the Watchmen agree they are ‘morally checkmated’.

Only Rorschach – so-called for the mutating inkblot concealing his face – contests this. He holds that some actions are intrinsically wrong and must be condemned irrespective of any beneficial outcomes they may produce (the philosophical term for this position is deontological ethics).

Who’s right?

Let’s assume that Ozymandias is factually correct in believing that humanity would have been destroyed had he not acted. This is the kind of thing we might expect the World’s Smartest Man to know. But Ozymandias has committed murder on the scale of a Hitler or a Pol Pot. Surely, his actions are wrong, no matter what?

So is Rorschach right?

Well, if he is, then Ozymandias should be killed or punished and the plot revealed. But Rorschach’s insistence seems wrong-headed. As Dr Manhattan points out, ‘Exposing this plot, we destroy any chance of peace, dooming the Earth to worse destruction’.

Moore reinforces this impression by portraying Rorschach as a moral fanatic obsessed with punishment for its own sake. Ozymandias, by contrast, appears reasonable and genuinely pained by the deaths he has caused. So we seem confronted with four alternatives.


Ozymandias is right to sacrifice millions to save billions.

Or Rorschach is right.

Or they are both wrong.

Or they are both right.

The last possibility can be discounted if their positions are genuine contraries. However, there is another way of interpreting these moral claims. In his famous work, The Prince, Niccolò Machiavelli argued that canons of moral right have little place in politics. When deciding the future of a state (or a planet), we should be prepared to commit evil acts to secure the paramount goal of political order.

Machiavelli’s position is a kind of consequentialism. However, he does not claim that conventionally evil acts cease to be bad when performed for worthy political ends. If judged according to the principles of public morality, they are necessary and bear testimony to the prowess of a Prince (or a Superhero). But they’re still wrong according to the standards of personal morality.

Thus, adopting Machiavelli’s position, we can regard Ozymandias as having performed a very ‘dirty’ but necessary act. Both his position and Rorschach’s can then be affirmed on different grounds. Is this a satisfactory resolution of the dilemma? One could object that any claim that an act is politically necessary must involve an appeal to moral grounds if it is not to be merely cynical – and Ozymandias, unlike the Comedian, is no cynic. Thus it remains troublingly uncertain whether this anti-superhero tale contains a genuine super-villain.

© The Open University 2011, all rights reserved

More Open University Philosophy @

OpenLearn: Introducing Philosophy

A222: Exploring Philosophy


References: Alan Moore and Dave Gibbons, Watchmen, New York: DC Comics, 1987. Zack Snyder (dir.), Watchmen (2009).



Just had time to peruse the programme for the Transforming The Human Conference, Dublin City University, Oct 21–23. It has less mainstream analytic bioethics than I expected and more stuff on the ‘metaphysics’ of transhumanism and posthumanism. I think this emphasis is correct. We can’t take folk notions of personhood, embodiment or mind for granted when mapping and evaluating paths through ‘posthuman’ possibility space.

Posthumanism and Flat Ontology

On September 6, 2011, in Uncategorized, by enemyin1

Here’s the pre-publication version of an essay in the Springer Frontiers Collection which uses flat ontology to consider the nature and evaluation of posthuman lives.


The Disconnection Thesis

Forthcoming in The Singularity Hypothesis: A Scientific and Philosophical Assessment, Amnon Eden, Johnny Søraker, Jim Moor, and Eric Steinhart (eds.), Springer Frontiers Collection.

In this essay I claim that Vinge’s idea of a technologically led intelligence explosion is philosophically important because it requires us to consider the prospect of a posthuman condition succeeding the human one. What is the “humanity” to which the posthuman is “post”? Does the possibility of a posthumanity presuppose that there is a ‘human essence’, or is there some other way of conceiving the human-posthuman difference? I argue that the difference should be conceived as an emergent disconnection between individuals, not in terms of the presence or lack of essential properties. I also suggest that these individuals should not be conceived in narrow biological terms but in “wide” terms permitting biological, cultural and technological relations of descent between human and posthuman. Finally, I consider the ethical implications of this metaphysics. If, as I claim, the posthuman difference is not one between kinds but emerges diachronically between individuals, we cannot specify its nature a priori but only a posteriori. The only way to evaluate the posthuman condition would be to witness the emergence of posthumans. The implications of this are somewhat paradoxical. We are not currently in a position to evaluate the posthuman condition. But since posthumans could result from some iteration of our current technical activity, we have an interest in understanding what they might be like. It follows that we have an interest in making or becoming posthumans.




My extended abstract on the ‘Disconnection Thesis’ is now available on the website for the forthcoming Springer Frontiers book, The Singularity: A Scientific and Philosophical Assessment, Amnon Eden, Johnny Søraker, Jim Moor, and Eric Steinhart (eds.).


Stop Dave, I’m Afraid.

On March 24, 2011, in Uncategorized, by enemyin1


Here’s a link to an intriguing blog post and paper by James Boyle, Professor of Law at the Brookings Institution, on the implications of prospective developments in AI and biotechnology for our legal conceptions of personhood. The paper opens by considering the challenges posed by the prospect of Turing-capable artificial intelligences and genetic chimeras.

On the Very Idea of a Super-Swarm

On March 8, 2011, in Uncategorized, by enemyin1

The best we can do to understand a post-singularity dispensation, Vernor Vinge argues, is to draw parallels with the emergence of an earlier transformative intelligence: “[What] happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind” (Vinge 1993).

Vinge’s analogy implies that we could no more expect to understand a post-singularity mind than a rat or non-human primate – lacking refined propositional attitudes – could understand justice, number theory, and public transportation.

While this does not provide a rigorous theory of the human-posthuman difference (indeed, I would argue that no such theory could be possible a priori) it captures a central ethical worry about the implications of radical enhancement.

This is that prospective technically modified successors to humans may not be recognizable as potential members of a communal ‘we’, or prone to recognize humans as ethically significant others. It is not the prospect of technical modification to humanity per se (wide or narrow) that concerns us, but changes engendering beings so ‘alien’ that there would no longer be a basis for affinity, citizenship or shared concern with humans.

One reason why this might occur is suggested in Vinge’s worry that a posthuman reality could be “Too different to fit into the classical frame of good and evil” (Vinge 1993). Otherwise put, a post-singularity dispensation might include alien minds, phenomenologies and values so different from those supervening on either narrow human biology or upon what I refer to as our ‘wide human’ systems (e.g. enculturation into propositionally structured languages) that human-posthuman communication, co-operation or co-evaluation is impossible or pointless.

For example, public ethical frameworks in the secular democracies and beyond presuppose that candidates for our moral regard have similar phenomenologies, if only in sentient capacities for pain, fear, or enjoyment. However, most of these frameworks have more maximal conditions. Liberals, for example, place great emphasis on the human capacity for moral autonomy, allowing us, in Rawls’s words, ‘to form, to revise, and rationally to pursue a conception of the good’ (Rawls 1980, 525).

While theories of autonomy vary hugely in their metaphysical commitments, most require that candidates for moral personhood be capable of reflecting upon their lives and projects and thereby on the values expressed in the actions, lives and projects of their fellow persons. Arguably, this capacity has cognitive, affective and phenomenological preconditions. Cognitively, it presupposes the capacity for higher-order representation (to represent one’s own or others’ beliefs, desires, etc.). Affectively, it presupposes the capacity for feelings, emotions, and affiliations that form a basis for evaluating a life.  Phenomenologically it presupposes that persons experience the world as (or as if they were) a persistent subject or ‘self’.

Without the cognitive preconditions, no rational evaluation of values or social life would be possible. Without the affective and phenomenological preconditions, these evaluations would lack point or salience. A cognitive system incapable of experiencing itself as a persistent subject might have a purely formal self-representation (like my current thought about a region on my big toe), but could not experience humiliation, resentment or satisfaction that its life is going well because these attitudes require a rich apperceptive experience of oneself as a persistent subject.

This claim does not, it should be noted, entail metaphysical commitment to a substantial or metaphysically real self, but to a subjective phenomenology: the experience of being a self. It is possible and even likely, as Thomas Metzinger has argued, that our first person phenomenology is a ‘functionally adequate but representationally unjustified fiction’ resulting from the fact that the neural processes that generate our sense of embodied and temporally situated selfhood are phenomenally (if not cognitively) inaccessible to components of the system responsible for meta-representing its internal states (Metzinger 2004, 58, 279). If Metzinger is right, the self around whom my egocentric fears and ambitions revolve does not exist. As for Nietzsche, this implies that our experience of the self as a source of agency is likewise illusory, for it only reflects our unawareness of the chains of causation leading to our decisions and actions (Nietzsche 1992, 218-219, cited in Sommers 2007).

So the issue here is not the metaphysical adequacy of public ethical frameworks like liberalism or virtue ethics but their applicability in a posthuman future. Allowing for arguable exceptions (such as Buddhism), they may all rest on a metaphysical error. However, the propensity for self-evaluation, feeling ‘reactive attitudes’ to the quality of others’ attitudes, attributing responsibility and praise, etc. all presuppose first person phenomenology and, arguably, are necessary for human social forms. Thus the ‘user illusion’ of persistent selfhood may be functionally necessary for human life because it is necessary for any culturally mediated experience of moral personhood.

However, Vinge argues that a super-intelligent AI++ might lack awareness of itself as a persistent “subject”.

Some philosophers might regard this prospect with scepticism. After all, if having subjectivity, or Dasein, etc., is a condition for general intelligence, a subjectless posthuman could not be regarded as generally intelligent. However, the validity of such objections would hinge a) on the scope of any purported deduction of the subjective conditions of experience or objective knowledge and b) on the legitimacy of transcendental methodology as opposed, say, to naturalistic accounts of subjectivity and cognition.

If, as writers such as Metzinger, Daniel Dennett or Michael Tye argue, we can naturalize subjectivity by analysing it in terms of the causal-functional role of representational states in actual brains, then it is legitimate to speculate on the scope for other role-fillers. Even if all intelligences need Dasein, it doesn’t follow that all modes of Being-in-the-world are equivalent or mutually comprehensible. Our Dasein, Metzinger emphasizes, comes in a spatio-temporal pocket (an embodied self and a living, dynamic present):

[The] experiential centeredness of our conscious model of reality has its mirror image in the centeredness of the behavioral space, which human beings and their biological ancestors had to control and navigate during their endless fight for survival. This functional constraint is so general and obvious that it is frequently ignored: in human beings, and in all conscious systems we currently know, sensory and motor systems are physically integrated within the body of a single organism. This singular “embodiment constraint” closely locates all our sensors and effectors in a very small region of physical space, simultaneously establishing dense causal coupling (see section 3.2.3). It is important to note how things could have been otherwise—for instance, if we were conscious interstellar gas clouds that developed phenomenal properties (Metzinger 2004, 161).

A post-human swarm intelligence composed of many mobile units might distribute its embodiment or presence to accommodate multiple processing threads in multiple presents. We might not be able to coherently imagine or describe this phenomenology, but our incapacity to imagine X is, as Dennett emphasizes, not an insight into the necessity of not-X (Dennett 1991, 401; Metzinger 2004, 213).

The inaccessibility of the posthuman and the posthuman impasse

If artificial intelligences or other potential entities of the kind grouped under the ‘posthuman’ rubric could have non-subjective phenomenologies, then there are prima facie grounds for arguing that they would be both hermeneutically and evaluatively inaccessible for contemporaneous humans or for modestly augmented transhumans – we might refer to both variants of humans using Nicholas Agar’s neologism ‘MOSH’:  Mostly Original Substrate Human (Agar 2010, 41-2).

The alienness and inaccessibility of such beings would not be due to weird body plans or, directly, superhuman intelligence. There are numerous coherent SF speculations in which humans, intelligent extra-terrestrials, cyborgs and smart, loquacious AIs communicate, co-operate, manipulate one another, argue about value systems, fight wars, and engage in exotic sex. However, these democratic transhumanist utopias or galactic empires are predicated on narrow humans and narrow non-humans (whether ETs or droids) sharing the functional requirements for subjective phenomenology and moral personhood. The kind of beings that might result from Vinge’s transcendental event, however, could lack the phenomenological self-presentation which grounds human autonomy while having phenomenologies and metarepresentational capacities that would elude human comprehension.

As I have suggested elsewhere, this prospect represents a possible impasse for contemporary transhumanism rooted, as it is, in these public ethical frameworks grounded on conceptions of autonomy and personhood. How should transhumanists respond to the possibility that their policies might engender beings whose phenomenology and thought might exceed both our hermeneutic and evaluative grasp?

On the Very Idea of an Impasse: A Davidsonian objection

Donald Davidson’s objections to the intelligibility of radically incommensurate or alien conceptual schemes or languages might give us grounds to be suspicious of the very idea of radically alien intelligences. In ‘On the Very Idea of a Conceptual Scheme’, Davidson suggests that theories of incommensurability must construe conceptual schemes either in terms of a Kantian scheme/content dualism or in terms of a relation of ‘fitting’ or ‘matching’ between language and world. Davidson claims that the Kantian trope presupposes that the thing organized is composite, affording comparison with our conceptual scheme after all (Davidson 2001a, 192). Since incommensurability implies the absence of such a common point of comparison, the propositional trope – fitting the facts or the totality of experience, or whatever – is all that is left. For Davidson, this just means that the idea of an acceptable conceptual scheme is one that is mostly true (Ibid. 194). So an alien conceptual scheme or language, by these lights, would be largely true but uninterpretable (Ibid.).

For Davidson’s interpretation-based semantics, this is equivalent to a language recalcitrant to radical interpretation. But the assumption that alien linguistic behaviour generates largely true ‘sentences’ is just the principle of charity that a radical interpreter must assume when testing a theory of meaning for that language.

To re-state this in terms of the current problematic: if alien posthumans had minds, they would have a publicly accessible medium which tracks truths, allowing us to test a semantics for alienese.

Davidson holds that knowledge of an empirical theory specifying the truth conditions of arbitrary sentences of a language would suffice for interpreting the utterances of its speakers (given knowledge that the theory in question was interpretative for it). If we allow this (ignoring, for now, the standard objections to the claim that a truth theory for L would, in effect, be a theory of meaning for it), then posthumans’ having minds at all would entail their interpretability in principle for beings with different kinds of minds.

So does Davidson’s hermeneutics of radical interpretation rescue transhumanism from aporia by deflating the idea of the radical alien?

I think not. Firstly, we have to relinquish the idea that our interpretative knowledge of the radical alien must consist in some explicit formal device such as a Tarskian truth theory. The role of formal semantics in Davidson’s work is to explicate our informal comprehension of language. An interpretative theory can be implicit in an interpreter’s pre-reflective grasp of the inference relationships of a language and her ability to match truth conditions with true utterances (Davidson 1990, p. 312).

Now suppose a human radical interpreter is required to interpret a really ‘weird’ posthuman such as an ultra-intelligent swarm. Davidson’s semantics provides grounds for believing that the swarm-mind would not be a cognitive thing-in-itself, inaccessible in principle to minds of a different stamp. It entails that if we could learn to follow whatever passes for inference for the swarm and track the recondite facts that it affirms and denies, we would understand swarmese. But contingencies might hinder attempts by any MOSHs in the area to understand the swarm medium of thought, even given principled interpretability.

Even if we suspend the assumption that interpretative knowledge must consist in a formal theoretical model, it is not clear that we can suspend the constraint that it constitutes beliefs or issues in sentences about the truth conditions of sentences or sententially structured attitudes.

However, the public medium employed by a swarm could be non-propositional in nature and thus not straightforwardly expressible in sentential terms. For example, it might be a non-symbolic system lacking discrete expressions. Simulacra – as the computer scientist Bruce MacLennan refers to these continuous formal systems – would, by hypothesis, be richer and more nuanced than any discrete language (MacLennan 1995; Roden Forthcoming). Their semantics as well as their syntax would be continuous in nature. The formal syntax and semantics for a simulacrum can be represented symbolically in continuous mathematics, but an interpretation of a non-discrete representational system by a discrete one could be massively partial, since it would have to map discrete symbols onto points of a continuum. Thus, whereas a discrete system might distinguish the proposition P from its negation using the binary operator ‘Not’ via a semantic mapping onto one of two semantic values ({true, false}), a non-discrete equivalent could have any number of shadings between P and its negation.

The effectiveness of any propositional interpretation of a simulacrum would hinge on the dynamical salience of these shadings within the cognitive dynamics of the system under interpretation. Most of the shadings between ‘Snow is white’ and ‘Snow is not white’ might be differences that make no difference for the swarm. On the other hand, the continuum could contain a rich dynamic structure whose cognitive implications could not be conveyed in discrete form at all.
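The partiality of discrete interpretation can be made vivid with a toy sketch. This is my own illustration, not MacLennan’s formalism: suppose a ‘continuous’ system registers the contrast between P and its negation as a point in the real interval [0, 1], while a discrete interpreter can only assign one of two semantic values.

```python
def discrete_interpretation(shading: float) -> bool:
    """Collapse a continuous 'shading' between not-P (0.0) and P (1.0)
    into one of two discrete semantic values, {True, False}."""
    return shading >= 0.5

# Three distinct continuous states of the system...
shadings = [0.51, 0.75, 0.99]

# ...are indistinguishable after discrete interpretation: the mapping
# is massively partial, erasing gradations that might be dynamically
# salient for the continuous system itself.
readings = {discrete_interpretation(s) for s in shadings}
print(readings)  # {True}
```

On this picture, what the collapse loses depends entirely on whether the discarded shadings make a dynamical difference to the system being interpreted.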

We do not know whether sophisticated thought could function without using a syntax and semantics along the lines of our recursively structured languages and formal systems – at least as a component of the hybrid mental representations discussed by active externalists (See Clark 2006). However, my response to the Davidsonian objection makes a case for the conceivability of sophisticated cognitive systems surpassing Wide Human interpretative capacities – i.e. those mediated by public symbol systems. If our imaginary swarm intelligence were a system of this type, then swarm thinking could be as practically inaccessible to humans as human thinking is for cats or dogs; if not inaccessible in principle to systems with the right computational resources.

These considerations support the speculative claim that posthuman lives might be interpretable in principle, but not by us. Moreover, even if the cognitive inaccessibility of posthumans is exaggerated in this claim, we have noted grounds for thinking that they could be so phenomenologically unlike us that public ethical systems of personal autonomy, good or virtue cannot be applied to them.


Agar, Nicholas (2010), Humanity’s End (MIT).

Chalmers, D. (2009), ‘The Singularity: A Philosophical Analysis’, accessed 4 July 2010.

Clark, A. (2003), Natural Born Cyborgs. Oxford: Oxford University Press.

Clark, A. (2006), ‘Material Symbols’, Philosophical Psychology Vol. 19, No. 3, June 2006, 291–307.

Clark A. and D. Chalmers (1998), ‘The Extended Mind’, Analysis 58(1), 7-19.

Davidson, D. (1984), ‘On the Very Idea of a Conceptual Scheme’, in D. Davidson, Inquiries into Truth and Interpretation (Clarendon Press, Oxford), pp. 183-198.

____(1990). ‘The Structure and Content of Truth’, Journal of Philosophy, 87 (6), pp. 279-

Dennett, D. (1991), Consciousness Explained (Little, Brown and Company).


MacLennan, B.J. (1995), ‘Continuous Formal Systems: A Unifying Model in Language and Cognition’, in Proceedings of the IEEE Workshop on Architectures for Semiotic Modeling and Situation Analysis in Large Complex Systems, Monterey.

Metzinger, T. (2004), Being No One: The Self-Model Theory of Subjectivity (MIT Press).

Nietzsche, F. (1992), Beyond Good and Evil, in The Basic Writings of Nietzsche, edited by Walter Kaufmann (The Modern Library).

Rawls, John (1980), ‘Kantian Constructivism in Moral Theory’, The Journal of Philosophy, 77(9), pp. 515-572.

Sommers, Tamler (2007). ‘The Illusion of Freedom Evolves’, in Distributed Cognition and the Will, David Spurrett, Harold Kincaid, Don Ross, Lynn Stephens (eds). MIT Press.

Vinge, V. (1993), ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’, accessed 24 April 2008.


Human Enhancement and Wide Human Descent

On March 4, 2011, in Uncategorized, by enemyin1

Elsewhere I have summarized the position known as “speculative posthumanism” (SP):

Speculative posthumanists claim that descendants of current humans could cease to be human by virtue of a history of technical alteration (Roden 2010).

The SP schema defines the posthuman in terms of a process of technical alteration rather than more common terms such as ‘enhancement’ or ‘augmentation’. This is because SP is a metaphysical thesis and not an ethical one; though I argue that it has strong ethical implications for our technical praxis.

It is also peculiar because ‘descent’ is intended in a “wide” sense insofar as qualifying entities might include our biological descendants or beings resulting from purely technical mediators (e.g., artificial intelligences, synthetic life-forms, or uploaded minds). I shall explain the notion of wide descent further below and show how it implies a correlatively extensive notion of the human.

In this post I shall justify the use of a value-neutral alteration relation, and then my use of ‘wide’ as opposed to ‘narrow’ descent, along with the cognate concepts of wide and narrow humanity.

Value Neutrality

SP holds that a future history of a general type is metaphysically and technically possible. It does not imply that the posthuman would improve on the human state or that there would exist an accessible common perspective from which to evaluate human and posthuman lives.

Now, it could be objected that this ethical neutralization of the historical successor relation in the SP schema is overly cautious and loses traction on what could distinguish humans from hypothetical posthumans: namely, the possession of posthuman capacities far in excess of the correlative powers exercised by any human without radical technological intervention. One of the most widely used formulations of the idea of the posthuman – that of transhumanist philosopher Nick Bostrom – is non-neutral. He defines a posthuman as a ‘being that has at least one posthuman capacity’, by which is meant ‘a central capacity greatly exceeding the maximum attainable by any current human being without recourse to new technological means’. Candidates for posthuman capacities include augmented ‘healthspan’, ‘cognition’ or emotional dispositions (Bostrom 2008).

While this is not a purely metaphysical conception of the posthuman, it is, it might be argued, not so loaded as to beg important ethical questions against the philosophical critics of radical enhancement. As Allen Buchanan points out, ‘enhancement’ is a restrictedly value-laden notion insofar as enhancing a capacity implies making the capacity better or more effective but does not imply improving the welfare of its bearer (Buchanan 2009, 350). ‘Alteration’, on the other hand, is so neutral that a technical process could count as posthuman-engendering even if it resulted in wide descendants of humans with capacities far below those of normal humans.

Surely, it can be objected, SP casts the definition of posthuman so wide that it fails to capture what some ethicists find disturbing about the programs for radical enhancement currently being promulgated by transhumanists such as Bostrom.

It is true that SP might at first seem to apply, by default, to prefrontal lobotomy patients or ‘posthuman’ babies bio-engineered to recreate slower-witted and less capable hominid ancestors of Homo sapiens. However, SP is a schematic formulation that requires a fuller explication of notions like ‘human’ and ‘non-human’. Once this supplemental account is in place the scope for trivial problem cases will be considerably reduced.

The advantage of the neutral historical successor relation in SP is that it doesn’t presuppose any common measure of human and posthuman capacities. Posthumans might conceivably result from a progressive enhancement of human cognitive capacities such as working memory or mathematical and analogical reasoning. Alternatively, our posthuman descendants might have capacities we currently have no concepts for while lacking some capacities that we can conceive of.

In a forthcoming article I consider how non-symbolic cognitive ‘workspaces’ that render language vestigial might mediate posthuman thinking and communication. It is not clear that the process leading to this would constitute an augmentation history in the usual sense, since according to my scenario it could involve both the loss of one central capacity (the capacity to have and express structured propositional attitudes) and the acquisition of an entirely new one. Yet it is arguable that it could engender beings so different from us in cognitive structure that they would be nonhuman (Roden Forthcoming; 2010).

The Borg in Star Trek are another imaginary variation on the theme of the ‘equivocal posthuman’, since individual members of the collective have their capacities for autonomy and practical reason drastically reduced compared to unassimilated humanoids. The Borg collective, it is implied, has enormous cognitive powers, but these are not possessed by its individual members; they emerge from the interactions of highly networked drones, each of whom has had its humanoid capacities for reflection and agency suppressed. While the Borg seem like a conceivable form of posthuman life, they result from the almost complete inhibition of the kind of high-level cognitive and affective capacities that Bostrom treats as constitutive of the posthuman.

Such possibilities are thrown into greater relief if we count prospective artificial intelligences or synthetic life forms among our possible descendants. As my schematic formulation of SP implies, this involves a wide notion of descent. I will elaborate the distinction between wide descent and narrow descent below in terms of a distinction between a narrow biological conception of the human qua species and a wide conception of the human as a socio-technical assemblage that includes narrow humans as functionally obligatory components. Whereas narrow humanity can be thought of as a natural kind, wide humanity is a cultural construction with planetary reach.

Wide Descent

There are three justifications for introducing wide descent and the correlative notion of wide humanity:

1)    The appropriate concept of descent for SP is not a natural biological kind. Restricting candidates for humanity or posthumanity to the natural biological descendants of humans would be excessively narrow. Prospective NBIC technologies may involve discrete bio-technical modifications of the reproductive process, such as human cloning or the introduction of transgenic or artificial genetic material (e.g. artificial chromosomes), or very radical variants of the reproductive process such as personality uploading or mind-cloning (Agar 2010). It follows that whatever notion of descent we substitute into the SP schema should be wide enough to apply to both discrete and radical technical modifications of reproduction. When considering the lives of hypothetical posthuman descendants we must understand descent as a relationship that can be technically mediated to an arbitrary degree, not solely in terms of the exchange of genetic material between gametes.

2)    Potential ancestors of posthumans could include, but are not restricted to, members of Homo sapiens (narrow humans). Conceivable posthumans may be related via technically mediated biological descent to narrow humans. However, it is equally conceivable that a singularity-inducing prospective AI could be a human artefact, while AI+ or AI++ might be non-human artefacts. Alternatively, posthumans might be hybrids of biological and artificial intelligence, or entirely synthetic life forms. Thus entities that might elicit our ethical concern with the posthuman could conceivably emerge via modified biological descent, recursive extension of AI technologies (involving human and/or non-human designers), quasi-biological descent from synthetic organisms, a convergence of the above, or ‘Other’ (some technogenetic process that has not been anticipated).

3)    Humans are the result of a technogenetic process of emergence and not a purely biological one. The most plausible historical analogy for the emergence of posthumans, as Vernor Vinge observes, is the ‘rise of humankind’, which differentiated humans from non-human primates (Vinge 1993). But some well-regarded positions in anthropology, philosophy and cognitive science claim that this rise involved the co-evolution of biological narrow humans, cultural entities such as languages, and techniques (Deacon 1997). This picture can be integrated into the philosophical position known as active externalism, for which the distinction between bodily and extra-bodily processes is irrelevant when identifying cognitive processes such as thinking, memory or imagination. Active externalists like Andy Clark and David Chalmers argue from a principle of ‘parity’ between processes that go on in the head and any functionally equivalent process in the world beyond the skin sac. The parity principle implies that mental processes need not occur only in biological nervous systems but also in the environments and tools of embodied thinkers. Given parity, spoken language, written documents, culturally embedded crafts, or electronic information systems can all count as vehicles of human mental processes where they make a cognitive contribution to them (Clark and Chalmers 1998; Clark 2003 and 2006). If we adopt active externalism as a philosophical and anthropological thesis we must view existing forms of human mentation, such as mathematical thinking or critical theory, as utilizing hybrid vehicles: our biologically evolved (or co-evolved) brains together with artefacts such as symbols, diagrams or computers.

Considerations 1-3 support, if non-conclusively, the claim that, were posthuman life to emerge, its emergence would be a discontinuity in the technogenetic process that has generated recognisably human forms of life across this planet. This process has not been an exclusively biological one thus far (3), while considerations (1) and (2) suggest that biological processes could be highly technically mediated or largely transcended in the emergence of the posthuman. Becoming posthuman could involve a range of hybrid processes affecting both biological and extra-biological (technical, cultural) systems. Thus the relevant patient in the process would be a system with both biological (human) and non-biological (technical or cultural) components. I shall refer to this complex entity as the ‘wide human’ (WH). The WH is a complex entity containing many functionally complementary components: individual narrow humans, languages, technological systems, legal systems, cities, corporations, religions, nations and geographical regions. It is spatially distributed and temporally evolving. Many of these temporal changes have been due to localised changes in its parts that disseminated through its geographical extent, drastically modifying its powers, which always emerge from the ways its parts interact – for example, in the ways in which a technology like writing impacts on human cognitive and social practices.

We can now define wide human descent recursively:

An entity is a wide human descendant if it is the historical consequence of a replicative or productive process either:

A) occurring to a wide human descendant (the recursive part); or

B) occurring to a part of the WH (where the ancestral part may be wholly biological, wholly technological, or some combination of the two).
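The recursive definition above can be sketched as a toy predicate. This is only an illustrative model, not anything from the original text: the `Entity` class, its fields, and the example lineage (a narrow human designing an AI, which in turn produces a synthetic organism) are all assumptions introduced here to show how the base case (B) and recursive case (A) interact.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """Illustrative stand-in for any biological, technical or hybrid entity."""
    name: str
    # True if the entity is a part of the Wide Human (WH): a narrow human,
    # a language, a technological system, an institution, etc.
    part_of_wh: bool = False
    # The entities from whose replicative or productive processes this
    # entity historically resulted (biological, technical, or hybrid).
    ancestors: list["Entity"] = field(default_factory=list)

def is_wide_human_descendant(e: Entity) -> bool:
    """An entity is a wide human descendant if it results from a process
    occurring to a part of the WH (base case B) or to another wide human
    descendant (recursive case A)."""
    return any(
        a.part_of_wh or is_wide_human_descendant(a)
        for a in e.ancestors
    )

# Hypothetical lineage: an AI designed by a narrow human is a wide human
# descendant, as is a synthetic organism derived from that AI, even though
# neither descends biologically from Homo sapiens.
human = Entity("narrow human", part_of_wh=True)
ai = Entity("artificial intelligence", ancestors=[human])
synthetic = Entity("synthetic organism", ancestors=[ai])
```

Note one consequence the sketch makes visible: descent is defined entirely by historical process, not by the capacities of the resulting entity, which is exactly the value neutrality the SP schema aims for.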


Agar, Nicholas (2010), Humanity’s End (MIT).

Bostrom, Nick (2008), ‘Why I Want to be a Posthuman When I Grow Up’, in Medical Enhancement and Posthumanity, eds. Bert Gordijn and Ruth Chadwick, pp. 107-137.

Buchanan, Allen (2009), ‘Moral Status and Human Enhancement’, Philosophy and Public Affairs 37(4), 346-381.

Chalmers, D. (2009), ‘The Singularity: A Philosophical Analysis’, accessed 4 July 2010.

Clark, A. (2003), Natural-Born Cyborgs. Oxford: Oxford University Press.

Clark, A. (2006), ‘Material Symbols’, Philosophical Psychology Vol. 19, No. 3, June 2006, 291–307.

Clark A. and D. Chalmers (1998), ‘The Extended Mind’, Analysis 58(1), 7-19.

Deacon, Terrence (1997), The Symbolic Species: The Co-evolution of Language and the Brain. New York: W.W. Norton.

Roden, David (2010), ‘Deconstruction and Excision in Philosophical Posthumanism’, Journal of Evolution and Technology 21(1): 27-36.

Roden, D. (forthcoming), ‘Posthumanism and Instrumental Eliminativism’.

Vinge, V. (1993), ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’, accessed 24 April 2008.