Wilfred Sellars (1974) argues that we should not construe claims about meanings as expressing a semantic relation between a verbal entity (a word, sentence, etc.) and a language-independent entity (abstract or concrete) but as claims about the functional roles of linguistic tokens. Thus we should construe

“chat” (in French) means cat


*chat*’s (in French) are  •cat•’s

“La neige est blanche” (in French) means Snow is white


*La neige est blanche*’s (in French) are  •Snow is white•’s

Here the expression “*chat*’s” is a metalinguistic distributive term that refers to all non-semantically individuated tokens with a certain shape or sound, and the dot-quotation expression “•cat•’s” uses the English token “cat” to exemplify its functional role in English. The first statement says, in effect, that characters and letters of a certain shape in French have the same functional role as “cat” in English.
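Purely to fix intuitions, the classificatory device can be caricatured in a few lines of code. Everything here is invented for illustration – the role identifier, the per-community dictionaries – but it shows the shape of the idea: dot quotes sort tokens by functional role, not by relation to an abstract meaning.

```python
# Toy caricature of Sellars' dot quotation. Tokens are individuated
# non-semantically (by shape/sound); each community's usage assigns
# them a functional role. The role labels below are made up.
role_in_french = {"chat": "role-042"}
role_in_english = {"cat": "role-042"}

def same_dot_quoted_type(tok_a, roles_a, tok_b, roles_b):
    """*tok_a*'s are •tok_b•'s iff the tokens play the same
    functional role in their respective communities."""
    return roles_a[tok_a] == roles_b[tok_b]

# "*chat*'s (in French) are •cat•'s" comes out true on this toy assignment:
print(same_dot_quoted_type("chat", role_in_french, "cat", role_in_english))  # True
```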

This device allows Sellars to construct a conception of meaning which is not committed to extra-linguistic abstract entities such as propositions. The meaning of s is not constituted by its relation to some abstract entity p but by its functional role in a given linguistic community (its role within that community’s economy of language-entry, transition and exit rules). This is obviously an attractive notational device for nominalists who wish to rein in metaphysical commitments to non-linguistic abstracta. It reframes metaphysical issues about the existence of propositions or attributes as questions about the status of functional roles. Of course, functional roles are not metaphysically innocent or unproblematic. We can ask the Sellarsian whether normative facts supervene on non-normative ones and what the consequences of this relationship are. If we can do no better than supervenience to describe their relationship, this will be a problematic outcome for many naturalists. A second question – not unrelated to the first – is how functional-inferential roles are individuated. Presumably, they cannot be individuated semantically if Sellars’ account of meaning is to be non-question-begging.

In this post I want to consider a puzzle that is related to the second problem. I have discussed an analogous issue with regard to Davidson’s interpretation-based semantics in “Radical Quotation and Real Repetition” (Roden 2004). I’m not confident about the metaphysical solution I proposed in that paper, but if something like it can begin to address the issue for Sellars’ account of functional classification, this might help us think through the ontological underpinnings of interpretation.

The problem anatomized in “Radical Quotation” arose with regard to Davidsonian truth theories.

As Olav Gjelsvik (1994) points out, the formal model used by Davidson presupposes that we can pick out bits of the language we want to interpret syntactically. Davidson’s account requires that radical interpreters have a stock of primitive terms referring to constituent expressions of the object language and that these can be assembled into ‘structural descriptions’ reflecting the syntactic composition of its sentences (Davidson 1984, p. 133). For example, an axiom in a truth theory for a language might say of a certain concatenation of three symbols that it is satisfied by a sequence of objects if and only if the first member of the sequence is larger than the second member (thereby giving it the extension of the predicate “… is larger than …”).
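The point about syntactic individuation can be sketched in code. This is not Davidson’s formalism, just a toy: the “structural description” below is an invented stand-in that identifies an expression purely by the shape of its constituent symbols, and the axiom then fixes a satisfaction condition for it.

```python
# A structural description identifies an object-language expression purely
# syntactically: here, as a concatenation of three symbols (invented).
LARGER_THAN = ("a", "larger-than", "b")

def satisfies(sequence, expression):
    """A sequence of objects satisfies the expression iff its first member
    is larger than its second -- the axiom fixes the extension of the
    predicate '...is larger than...'."""
    if expression == LARGER_THAN:
        return sequence[0] > sequence[1]
    raise ValueError("no axiom covers this expression")

print(satisfies((5, 3), LARGER_THAN))  # True: 5 is larger than 3
print(satisfies((2, 9), LARGER_THAN))  # False
```

Note that nothing in the tuple `LARGER_THAN` carries semantic information; that is exactly what opens the door to the Twinglish problem discussed below, since another community could pair the same shapes with a different axiom.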

Why might this be a problem for Davidson? Well, it is a problem if we recall that Davidson’s use of model theory is designed to explicate an informal semantic notion: meaning. He proposes to do this by way of a notion he takes to be better understood: truth. Sellars’ approach (as I understand it) is procedural rather than model-theoretic, but one might expect it to need to meet analogous constraints (even if not the same ones).

So here’s where Gjelsvik thinks that Davidson’s account hits a bump.

If languages are individuated by the syntactic types composing their expressions – roughly, by the physical shape and structure of grammatical strings – the semantic properties of their sentences must be non-essential. It is thus possible for a sentence to have different semantic properties in different speech communities. But then a truth theory for one community can be made false if another uses tokens of these types differently. For example, on Twin Earth a language, Twinglish, might be spoken in which English-shaped predicates have contrary ‘meanings’.

The existence of Twinglish would be enough to falsify the T sentence:

‘Snow is white’ is True(E) if and only if Snow is white

Since it is the syntactic string referred to by ‘Snow is white’ which relativises a truth predicate, not the abbreviations “E” and “Tw”, there is nothing to distinguish it from a statement about a sentence of Twinglish:

‘Snow is white’ is True(Tw) if and only if Snow is white

If ‘. . . is white’ in Twinglish were a contrary of its English counterpart (meaning ‘. . . is green’, say), the ‘only if’ would make it false.

According to Gjelsvik, the only alternative is to specify English sentences semantically. A formal theory of the Tarskian kind achieves this by defining a predicate that holds of all and only the true sentences of a language. But its theorems flow by stipulation and logical necessity, whereas Davidsonian theories are supposed to express contingent, empirical claims about semantically uncharacterised sentences. Thus, Gjelsvik argues, a competent radical interpreter must assume that the world’s distribution of semantic properties is not of the Twinglish/English sort (Gjelsvik 1994, p. 34). The problem here is that this assumption utilizes pretheoretic concepts of subsentential meaning. (Using semantic concepts like “satisfaction” in the formalism of semantic theories is acceptable, according to Davidson, because they are part of the logical machinery of the theories; they are not explicatory as such.)

It seems that a similar problem afflicts the metalinguistic statements that occur in Sellarsian functional role ascriptions.

*chat*’s (in French) are  •cat•’s

would be false if Twin French speakers used *chat*’s differently to •cat•’s. Indeed, it would be false if anyone, anywhere, used *chat*’s in a way that ended up giving them a contrary functional role. Thus there must be other assumptions built into the ascription of metalinguistic types that are not evident in this formalism.

Well, it might seem that the Sellarsian is in a more favorable position than the Davidsonian here. For Gjelsvik, Davidson cannot constrain the scope of truth-based theories without introducing meaning by the back door. But the Sellarsian has only to claim that the distribution of functional roles is not of the silly type that would have *chat*’s acquiring contrary functional roles all over the place.

The problem with this fix is that there is absolutely nothing silly about contrary functional roles. As Robert Brandom’s inferentialist account implies, a term can acquire different functional roles where people have contrary beliefs. We would expect dancing inferential roles to be par for the course within any speech community. In any event, Sellars cannot preclude rampant homonymy without making their functional roles essential to interpretants in metalinguistic sortal sentences. But this would also render them trivial.

In consequence, many metalinguistic sortal claims are falsified by inferential nuances within and between language communities, and it is perfectly conceivable that there are no true ML sortals at all (allowing for sufficient homonymic variability across speech communities).

It is not clear to me where this would leave Sellars’ metaphysics of meaning. For example, can we build in a tacit reference to a given speech community which can be expected to exhibit the uniformities described by metalinguistic sortal claims? Maybe, but as well as being questionable for the Davidsonian/Brandomian reasons mentioned above, this seems to require an explicit notion of reference. If we cannot plausibly restrict the scope of ML sortals in such a way, however, it would seem to follow that many or most ML sortal claims are false (thus there are no ML functional types, or very few). The claim that the meaning of a term is its functional role would then have to be judged false as well.

My solution to the problem that Davidson faces is to treat metalinguistic statements in a constructionist spirit. Syntactical types – accordingly – are not contingent owners of functional roles. They are individuated by functional role. So English “white” and Twinglish “white” are distinct characters and not the same character used in different ways. The problem, then, is to account for the empirical, contingent character of claims like

*chat*’s are  •cat•’s

For on this account *chat* is not part of a “language” (like French) in the conventional sense but of a local idiom constructed purely for purposes of interpretation. For reasons similar to those discussed by Davidson in “A Nice Derangement of Epitaphs”, we are no longer supposing that the notion of a language is the basic one here. The issue, then, is what the function of an interpretant such as *chat* is in this sortal statement.

My solution circa 2004 was to say that its function is to repeat the utterances or parts of utterances used by native speakers of languages under interpretation (we return to the primal scene of radical interpretation, as it were).  *La neige est blanche* is designed to quote expressions in one idiom in another idiom (that of the interpreting discourse). So

*La neige est blanche*’s (in French) are  •Snow is white•’s

refers to a set of historically instantiated utterance events by repeating them. Thus there must be a historical-causal relation of some kind between the interpreter and users of the interpreted idiom which can explain its purchase on these (rather, say, than on users of an orthographically identical language on Twin Earth).

The ontological basis of this quotation is not exemplification of a common semantic type. It is an ontologically primitive relation of repetition or “iteration” (to use Derrideanese) which operates transversely between languages and language communities (non-language-relative repetition). Some events, it must be assumed, just repeat other events without having to fall under a common description. The worry, now, is that the interpreted terms in ML sortings are being used as instances of the items they repeat rather than being merely structural descriptions or examples of sign-designs. They are being used, so to speak, to refer to themselves. But if this is right, then the very act of interpreting them constitutes a variation in functional role. It is also a function that cannot obviously be expressed in inferential terms.

Finally, if the interpretants are essentially repeatabilia, then it is part of their job description (so to speak) that they can always accrue functional roles that differ from the ones they have had (otherwise interpretation would have no text). But then it cannot be inappropriate to use them in these “deviant” ways. Thus there no longer seems to be room for the normative facts which (supposedly) undergird the functionalist account.


Davidson, Donald (1984). Inquiries into Truth and Interpretation (Oxford: Clarendon Press).

____1986. ‘A Nice Derangement of Epitaphs’, in Ernest LePore (ed.) Truth and
Interpretation: Perspectives on the Philosophy of Donald Davidson (Oxford: Blackwell).

Derrida, Jacques. 1988. Limited Inc. Samuel Weber and Jeffrey Mehlman (trans.) (Evanston Ill.: Northwestern University Press).

Gjelsvik, Olav. 1994. ‘Davidson’s Use of Truth in Accounting for Meaning’, in Language, Mind and Epistemology: On Donald Davidson’s Philosophy, Gerhard Preyer, Frank Siebelt and Alexander Ulfig (eds.) (Dordrecht: Kluwer), pp. 21–43.

Lewis, Kevin. 2013. ‘Carnap, Quine and Sellars on Abstract Entities’, https://www.academia.edu/2364977/Carnap_Quine_and_Sellars_on_Abstract_Entities (Accessed 12-7-14).

Sellars, W. 1974. ‘Meaning as Functional Classification (A Perspective on the Relation of Syntax to Semantics)’, Synthese 27 (3/4): 417–437.



No Future? Catherine Malabou on the Humanities

On February 19, 2014, in Uncategorized, by enemyin1

Catherine Malabou has an intriguing piece on the vexed question of the relationship between the “humanities” and science in the journal Transeuropeennes.

It is dominated by a clear and subtle reading of Kant, Foucault and Derrida’s discussion of the meaning of Enlightenment and modernity. Malabou argues that the latter thinkers attempt to escape Kantian assumptions about human invariance by identifying the humanities with “plasticity itself”. The Humanities need not style themselves in terms of some invariant essence of humanity. They can be understood as a site of transformation and “deconstruction” as such.  Thus for Derrida in “University Without Condition”, the task of the humanities is:

the deconstruction of « what is proper to man » or to humanism. The transgression of the transcendental implies that the very notion of limit or frontier will proceed from a contingent, that is, historical, mutable, and changing deconstruction of the frontier of the « proper ».

For Foucault, similarly, the deconstruction of the human involves exhibiting its historical conditions of possibility and experimenting with these by, for example, thinking about “our ways of being, thinking, the relation to authority, relations between the sexes, the way in which we perceive insanity or illness”.

This analysis might suggest that the Humanities have little to fear from technological and scientific transformations of human bodies or minds; they are just the setting in which the implications of these alterations are hammered out.

This line of thought reminds me of a revealingly bad argument produced by Andy Clark in his Natural Born Cyborgs:

The promise, or perhaps threatened, transition to a world of wired humans and semi-intelligent gadgets is just one more move in an ancient game . . . We are already masters at incorporating nonbiological stuff and structure deep into our physical and cognitive routines. To appreciate this is to cease to believe in any post-human future and to resist the temptation to define ourselves in brutal opposition to the very worlds in which so many of us now live, love and work (Clark 2003, 142).

This is obviously broken-backed: that earlier bootstrapping didn’t produce posthumans doesn’t entail that future bootstrapping won’t. Even if humans are essentially self-modifying, it doesn’t follow that any prospective self-modifying entity is human.

The same problem afflicts Foucault and Derrida’s attempts to hollow out a reservation for humanities scholars by identifying them with the promulgation of transgression or deconstruction. Identifying the humanities with plasticity as such throws the portals of possibility so wide that it can only refer to an abstract possibility space whose contents and topology remain closed to us. If, with Malabou, we allow that some of these transgressions will operate on the material substrate of life, then we cannot assume that its future configurations will resemble human communities or human thinkers – thinkers concerned with topics like sex, work and death for example.

Malabou concludes with the suggestion that Foucault and Derrida fail to confront a quite different problem. They do not provide a historical explanation of the possibility of transformations of life and mind to which they refer:

They both speak of historical transformations of criticism without specifying them. I think that the event that made the plastic change of plasticity possible was for a major part the discovery of a still unheard of plasticity in the middle of the XXth century, and that has become visible and obvious only recently, i.e. the plasticity of the brain that worked in a way behind continental philosophy’s back. The transformation of the transcendental into a plastic material did not come from within the Humanities. It came precisely from the outside of the Humanities, with again, the notion of neural plasticity. I am not saying that the plasticity of the human as to be reduced to a series of neural patterns, nor that the future of the humanities consists in their becoming scientific, even if neuroscience tends to overpower the fields of human sciences (let’s think of neurolinguistics, neuropsychoanalysis, neuroaesthetics, or of neurophilosophy), I only say that the Humanities had not for the moment taken into account the fact that the brain is the only organ that grows, develops and maintains itself in changing itself, in transforming constantly its own structure and shape. We may evoke on that point a book by Norman Doidge, The Brain that changes itself. Doidge shows that this changing, self-fashioning organ is compelling us to elaborate new paradigms of transformation.

I’m happy to concede that the brain is a special case of biological plasticity, but, as Eileen Joy notes elsewhere, the suggestion that the humanities have been out of touch with scientific work on the brain is unmotivated. The engagement between the humanities (or philosophy, at least) and neuroscience already includes work as diverse as Paul and Patricia Churchland’s work on neurophilosophy and Derrida’s early writings on Freud’s Scientific Project.

I’m also puzzled by the suggestion that we need to preserve a place for transcendental thinking at all here. Our posthuman predicament consists in the realization that we are alterable configurations of matter and that our powers of self-alteration are changing in ways that put the future of human thought and communal life in doubt. This is not a transcendental claim. It’s a truistic generalisation which tells us little about the cosmic fate of an ill-assorted grab bag of  academic disciplines.


Clark, A. 2003. Natural-born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. New York: Oxford University Press.







Braidotti’s Vital Posthumanism

On October 21, 2013, in Uncategorized, by enemyin1

Critical Posthumanists argue that the idea of a universal human nature has lost its capacity to support our moral and epistemological commitments. The sources of this loss of foundational status are multiple according to writers like Donna Haraway, Katherine Hayles (1999), Neil Badmington (2003), Claire Colebrook and Rosi Braidotti. They include post-Darwinian naturalizations of life and mind that theoretically level differences between living and machinic systems and the more intimate ways of enmeshing living entities in systems of control and exploitation that flow from the new life and cognitive sciences. Latterly, writers such as Braidotti and Colebrook have argued that a politics oriented purely towards the rights and welfare of humans is incapable of addressing issues such as climate change or ecological depletion in the Anthropocene era in which humans “have become a geological force capable of affecting all life on this planet” (Braidotti 2013: 66).

On the surface, this seems like a hyperbolic claim. If current global problems are a consequence of human regulation or mismanagement, then their solution will surely require human political and technological agency and institutions.

But let’s just assume that there is something to the critical posthumanist’s deconstruction of the human subject and that, in consequence, we can no longer assume that the welfare and agency of human subjects should be the exclusive goal of politics. If this is right, then critical posthumanism needs to do more than pick over the vanishing traces of the human in philosophy, literature and art. It requires an ethics that is capable of formulating the options open to some appropriately capacious political constituency in our supposedly post-anthropocentric age.

Braidotti’s recent work The Posthuman is an attempt to formulate such an ethics. Braidotti acknowledges and accepts the levelling of the status of human subjectivity implied by developments in cognitive science and biology and the “analytic posthumanism” that falls out of this new ontological vision. However, she is impatient with what she perceives as a disabling vacillation and neutrality that easily follow from the junking of the human subject as the arbiter of the right and the good. She argues that a posthuman ethics and politics need to retain the idea of political subjectivity; an agency capable of constructing new forms of ethical community and experimenting with new modes of being:

In my view, a focus on subjectivity is necessary because this notion enables us to string together issues that are currently scattered across a number of domains. For instance, issues such as norms and values, forms of community bonding and social belonging as well as questions of political governance both assume and require a notion of the subject.

However, according to Braidotti, this is no longer the classical self-legislating subject of Kantian humanism. It is a vital, polyvalent connection-maker constituted “in and by multiplicity” – by “multiple belongings”:

The relational capacity of the posthuman subject is not confined within our species, but it includes all non-anthropocentric elements. Living matter – including the flesh – is intelligent and self-organizing, but it is so precisely because it is not disconnected from the rest of organic life.

‘Life’, far from being codified as the exclusive property or unalienable right of one species, the human, over all others or of being sacralised as a pre-established given, is posited as process, interactive and open ended. This vitalist approach to living matter displaces the boundary between the portion of life – both organic and discursive – that has traditionally been reserved for anthropos, that is to say bios, and the wider scope of animal and nonhuman life also known as zoe (Braidotti 2012: 60).

Thus posthuman subjectivity, for Braidotti, is not human but a tendency inherent in human and nonhuman living systems alike to affiliate with other living systems to form new functional assemblages. Clearly, not everything has the capacity to perform every function. Nonetheless, living systems can be co-opted by other systems for functions “God” never intended and Mother Nature never designed them for. As Haraway put it:  ‘No objects, spaces, or bodies are sacred in themselves; any component can be interfaced with any other if the proper standard, the proper code, can be constructed for processing signals in a common language’ (Haraway 1989: 187). There are no natural limits or functions for bodies or their parts, merely patterns of connection and operation that do not fall apart all at once.

Zoe . . . is the transversal force that cuts across and reconnects previously segregated species, categories and domains. Zoe-centered egalitarianism is, for me, the core of the post-anthropocentric turn: it is a materialist, secular, grounded and unsentimental response to the opportunistic trans-species commodification of Life that is the logic of advanced capitalism.

Of course, if anything can be co-opted for any function that its powers can sustain, one might ask how zoe can support a critique of advanced capitalism which, as Braidotti concedes, produces a form of the “posthuman” by radically disrupting the boundaries between humans, animals, species and technique. What could be a greater expression of zoe’s transversal potential than, say, Monsanto’s transgenic cotton Bollgard II? Bollgard II contains genes from the soil bacterium Bacillus thuringiensis that produce a toxin deadly to pests such as bollworm. Unless we believe that there is some telos inherent in thuringiensis or in cotton that makes such transversal crossings aberrant – which Braidotti clearly does not – there appears to be no zoe-eyed perspective that could warrant her objection. Monsanto’s genetic engineers are just sensibly utilizing possibilities for connection that are already afforded by living systems but which cannot be realized without technological mediation (here via gene transfer technology). If the genes responsible for producing the Bt toxin in thuringiensis did not work in cotton and increase yields, it would presumably not be the type used by the majority of farmers today (Ronald 2013).

Cognitive and biological capitalists like Google and Monsanto seem to incarnate the tendencies of zoe – conceived as a generalized possibility of connection – as much as the “not-for-profit” cyborg experimenters like Kevin Warwick or the publicly funded creators of HTML, Dolly the Sheep and Golden Rice. Doesn’t Google show us what a search engine can do?

We could object to Monsanto’s activities on the grounds that they have invidious social consequences or on the grounds that all technologies should be socially rather than corporately controlled. Neither of these arguments is obviously grounded in posthumanism or “zoe-centrism” – Marxist humanists would presumably agree with the latter claim, for example.

However, we can find the traces of a zoe-centered argument in the Deleuzean ethics explored in the essay “The Ethics of Becoming Imperceptible” (Braidotti 2006). This argues for an ethics oriented towards enabling entities to actualize their powers to their fullest “sustainable” extent. A becoming or actualization of power is sustainable if the assemblage or agency exercising it can do so without “destroying” the systems that make its exercise possible. Thus an affirmative posthuman ethics follows Nietzsche in making it possible for subjects to exercise their powers to the edge but not beyond, where that exercise falters or where the system exercising it falls apart.

To live intensely and be alive to the nth degree pushes us to the extreme edge of mortality. This has implications for the question of the limits, which are in-built in the very embodied and embedded structure of the subject. The limits are those of one’s endurance – in the double sense of lasting in time and bearing the pain of confronting ‘Life’ as zoe. The ethical subject is one that can bear this confrontation, cracking up a bit but without having its physical or affective intensity destroyed by it. Ethics consists in re-working the pain into threshold of sustainability, when and if possible: cracking, but holding it, still.

So Capitalism can be criticized from the zoe-centric position if it constrains powers that could be more fully realized in a different system of social organization. For Braidotti, the capitalist posthuman is constrained by the demands of possessive individualism and accumulation.

The perversity of advanced capitalism, and its undeniable success, consists in reattaching the potential for experimentation with new subject formations back to an overinflated notion of possessive individualism . . ., tied  to the profit principle. This is precisely the opposite direction from the non-profit experimentations with intensity, which I defend in my theory of posthuman subjectivity. The opportunistic political economy of bio-genetic capitalism turns Life/zoe – that is to say human and non-human intelligent matter – into a commodity for trade and profit (Braidotti 2013: 60-61).

Thus she supports “non-profit” experiments with contemporary subjectivity that show what “contemporary, biotechnologically mediated bodies are capable of doing” while resisting the neo-liberal appropriation of living entities as tradable commodities.

Whether the constraint claim is true depends on whether an independent non-capitalist posthuman (in Braidotti’s sense of the term) is possible or whether significant posthuman experimentation – particularly those involving sophisticated technologies like AI or Brain Computer Interfaces – will depend on the continued existence of a global capitalist technical system to support it. I admit to being agnostic about this. While modern technologies such as gene transfer do not seem essentially capitalist, there is little evidence to date that a noncapitalist system could develop them or their concomitant forms of hybridized “posthuman” more prolifically.

Nonetheless, there seems to be a significant ethical claim at issue here that can be used independently of its applicability to the critique of contemporary capitalism.

For example, I have recently argued for an overlap or convergence between critical posthumanism and Speculative Posthumanism (SP): the claim that descendants of current humans could cease to be human by virtue of a history of technical augmentation. Braidotti’s ethics of sustainability is pertinent here because SP in its strong form is also post-anthropocentric – it denies that posthuman possibility is structured a priori by human modes of thought or discourse – and because it defines the posthuman in terms of its power to escape from a socio-technical system organized around human-dependent ends (Roden 2012). The technological offspring described by SP will need to be functionally autonomous insofar as they will have to develop their own ends or modes of existence outside or beyond the human space of ends. Reaching “posthuman escape velocity” will require the cultivation and expression of powers in ways that are sustainable for such entities. This presupposes, of course, that we can have a conception of a subject or agent that is grounded in their embodied capacities or powers rather than in general principles applicable to human agency. Understanding its ethical valence thus requires an affirmative conception of these powers that is not dependent on overhanging anthropocentric ideas such as moral autonomy. Braidotti’s ethics of sustainability therefore suggests some potentially viable terms of reference for formulating an ethics of becoming posthuman in the speculative sense.


Badmington, N. (2003) ‘Theorizing Posthumanism’, Cultural Critique 53 (Winter): 10-27.

Braidotti, R. (2006), ‘The Ethics of Becoming Imperceptible’, in Deleuze and Philosophy, Constantin Boundas (ed.), Edinburgh: Edinburgh University Press, pp. 133-159.

Braidotti, R (2013), The Posthuman, Cambridge: Polity Press.

Colebrook, Claire (2012a), ‘A Globe of One’s Own: In Praise of the Flat Earth’, Substance: A Review of Theory & Literary Criticism 41 (1): 30–39.

Colebrook, Claire (2012b), ‘Not Symbiosis, Not Now: Why Anthropogenic Change Is Not Really Human’, Oxford Literary Review 34 (2): 185–209.

Haraway, Donna (1989), ‘A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s’. Coming to Terms, Elizabeth Weed (ed.), London: Routledge, 173-204.

Hayles, K. N. (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

Roden, D. (2010). ‘Deconstruction and excision in philosophical posthumanism’. The Journal of Evolution & Technology, 21(1), 27-36.

Roden, D. (2012). ‘The Disconnection Thesis’. In Singularity Hypotheses (pp. 281-298). Springer Berlin Heidelberg.

Roden, D. (2013). ‘Nature’s Dark domain: an argument for a naturalized phenomenology’. Royal Institute of Philosophy Supplement, 72, 169-188.

Roden, D. (2014). Posthuman Life: Philosophy at the Edge of the Human. Acumen Publishing.


There’s a very interesting discussion of the merits of Marxism and an Anarchist-Green politics set out in John Zerzan’s book Twilight of the Machines (which I’ll admit to downloading, not reading!) over at the (Dis)loyal Opposition to Modernity. As I understand from the gloss in the DOM post, Zerzan views technology as inherently alienating and destructive and proposes its relinquishment in the interest of human autonomy and the planet (this gloss may need nuancing, obviously!).

Unlike some technophilic left-liberals, I treat relinquishment as a serious moral response to the incompatibility of technical modernity and political transparency. This is because modern technological systems are post-geographic and post-cultural – that is, any invention or device can be replicated in multiple contexts with inherently unpredictable results for the rest of the system (think, for example, of the global impact of Tim Berners-Lee’s invention of hypertext for a cabal of physicists at CERN). If modern technological systems are inherently unpredictable, then they are inherently uncontrollable. So even if we replace capitalist forms of ownership with a more rational way of allocating resources, we’ll still be “living on this thing like fleas on a cat” (to quote Dr Gaius Baltar).

The only alternatives to verminous status I can conceive are relinquishment or a kind of anti-technological theocracy that artificially restricts the dynamism of self-augmenting technological systems (SATS). Both solutions are arguably based on a self-defeating ideal of sovereignty or autonomy. As Martin Hägglund argues via Derrida, there is no decision without the spacing between now and then – meaning that we can’t live without chancing the worst. The Anarcho-Green is thus a wrong-headed, philosophically naïve death-obsessive but, as fantasies of self-immolation go, his is a relatively intelligible one.




Metaphysical Realism (MR) is not one claim but, Putnam argues, a package of interrelated claims about the mind-world relationship. The key components of MR are 1) the independence thesis; 2) the correspondence thesis; 3) the uniqueness thesis. The independence thesis states that there is a fixed totality of mind-independent objects (the world). The correspondence thesis states that there are determinate reference relations between bits of language or mental representations and the bits of the world to which they refer. The uniqueness thesis states that there is a theory whose sentences correctly describe the states of all these objects. This implies a singular correspondence between the terms belonging to this theory and the objects and properties that they refer to (Putnam 1981, 49). As a package it is cohesive. One needs mind-independent objects and properties for the terms of the one true theory to correspond to. And there must be some unique total fact about these objects if there is to be one correct way in which a theory can represent this total fact.

We can imagine this theory being expressed in a language consisting of names like “Fido” and “Shlomo”, property and relation terms like “…is a dog”, “…is a cat” or “…is father of…”, as well as all the quantificational apparatus that we need to make general claims: e.g. “There is at least one thing that is a cat” or “All dogs hate at least one cat”. Of course, since this is the one true theory, we might expect it to contain enough mathematics (e.g. set theory) to express the true laws of physics, the true laws of chemistry, etc. However, for this to be the one true theory, each true sentence that we can derive from it – e.g. “Shlomo is a cat” – must hook up with the world in the right way. For example, “Shlomo” must determinately refer to a unique object and this object must have the property referred to by “…is a cat” (this property might be the set of all cats or it might be the universal property of catness – again, depending on the metaphysical facts). [i]

An assignment of referents to terms along these lines is called an interpretation function. The set of objects, properties, relations, etc. that are matched up to terms by a particular interpretation function is called a model. On Putnam’s account, then, metaphysical realism is in effect the claim that there is a unique description of the world hooked up to that world by a single true interpretation function (matching names to objects, property terms to properties, etc.).

The uniqueness of the corresponding interpretation function is crucial here because if there were more than one good way of interpreting the terms of the one true theory, there would be alternative theories, each corresponding to a different interpretation function for the constituent terms of its language.[ii] In that case, there would not be one correct description of the world. So if realism comes down to a commitment to there being a God’s eye view of the world – a uniquely true theory which picks out the way the world is – and no interpretation function can be singled out as the right one, then realism has to be rejected.

What is the virtue that makes the one true theory unique? Well, to count as the one true theory, it would, at minimum, need to satisfy all the “operational constraints” that ideally rational inquirers would impose on such a theory. For example, if one imagines science progressing to an ideal limit at which no improvements can be made in its explanatory power, coherence, elegance or simplicity, then the one true theory would have to be as acceptable to ideally rational enquirers as that theory (Putnam 1981, 30).

Putnam’s argument against realism is that, given a theory that satisfies this ideal of operational virtue, there would always be a second, equally good theory that can be constructed by giving the sentences of the first different interpretations. Further, he argues that there is nothing beyond operational virtue that might distinguish the first theory from the second, because there are no mind-independent semantic facts that specify the right interpretation. If this is right, then there cannot be a one true theory that completely describes the world.

The argument begins with a theorem of model theory.[iii] The model-theoretic notion of a theory is that of a language L under a given interpretation function I which maps the terms of L onto a universe of objects and properties (properties are treated as sets of objects; for example, the relation of fatherhood would be the set of all ordered pairs whose second member is the son of the first). The theorem states that for every theory T1 (consisting of a language L under interpretation I) it is possible to gerrymander a function J that interprets each term of L “in violently different ways, each of them compatible with the requirement that the truth value of each sentence in each possible world be the one specified” (Putnam 1981, 33, 217-218). The basic idea is that under these “permuted” interpretation functions, the sentences that come out true in T1 in a given possible world would come out true in T2 in that world.[iv] The two theories T1 and T2 would not differ in assignments of truth values to sentences in any possible world and – being expressed in the same words – would have exactly the same structure, so each would be as simple and as elegant as the other.
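The permutation at the heart of the theorem can be made concrete with a toy model. The following Python sketch (an illustrative reconstruction using the “shadow” permutation from the footnotes, not Putnam’s own formalism) builds a deviant interpretation J from an intended interpretation I and checks that every atomic sentence receives the same truth value under both:

```python
# Toy version of Putnam's permutation argument.
# Universe: two ordinary objects and their "shadows".
universe = ["Fido", "Shlomo", "shadow(Fido)", "shadow(Shlomo)"]

# A permutation pi of the universe (objects <-> shadows).
pi = {
    "Fido": "shadow(Fido)", "shadow(Fido)": "Fido",
    "Shlomo": "shadow(Shlomo)", "shadow(Shlomo)": "Shlomo",
}

# Intended interpretation I: names -> objects, predicates -> extensions (sets).
I = {
    "names": {"Fido": "Fido", "Shlomo": "Shlomo"},
    "preds": {"is_a_dog": {"Fido"}, "is_a_cat": {"Shlomo"}},
}

# Permuted interpretation J: J(n) = pi(I(n)) and J(P) = {pi(x) : x in I(P)}.
J = {
    "names": {n: pi[o] for n, o in I["names"].items()},
    "preds": {P: {pi[x] for x in ext} for P, ext in I["preds"].items()},
}

def true_in(interp, name, pred):
    """Is the atomic sentence '<name> is <pred>' true under interp?"""
    return interp["names"][name] in interp["preds"][pred]

# Every atomic sentence gets the same truth value under I and J,
# even though "Shlomo" now names a shadow rather than a cat.
sentences = [(n, P) for n in I["names"] for P in I["preds"]]
assert all(true_in(I, n, P) == true_in(J, n, P) for n, P in sentences)
print(true_in(J, "Shlomo", "is_a_cat"))  # True
```

Because J is induced by a bijection on the universe, quantified sentences behave the same way: “There is at least one cat” is true under I just in case it is true under J. This is why nothing in the operational virtues of the theory can discriminate between the two interpretations.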

However, metaphysical realism is committed to the view that even an ideally confirmed and simple theory could be comprehensively false, because truth is “radically non-epistemic” – that is, truth is a matter of whether a sentence corresponds with the world, not of how well confirmed that sentence is. This is, of course, the position that Descartes is committed to in his Evil Demon thought experiment: the semantic facts that give my beliefs reference to a possible world are unaffected by the existence or nature of the mind-external world. Putnam’s version of this realist conceit is the science-fictional notion that we might be brains in vats being fed simulated experiences by a mad neurophysiologist. Thus, according to metaphysical realism, even a theory T1 that is operationally ideal and irrefutable for vat brains could still be false (Putnam 1978, 125). However, unlike Descartes, Putnam argues that this conceit is incoherent. If T1 is consistent it is possible to find an interpretation function that maps the language of T1 onto a model containing elements of whatever world happens to exist – even if that is vat-world. So under this interpretation T1 comes out true, not false (Putnam 1978, 126).

It can be objected that this would not be the interpretation “intended” by the vat brains (or the ensorcelled Descartes, if one prefers). But T1 would be operationally as good as it gets for the envatted. It would inform their practices of inference and prediction in just the same way that it would were it true. There seems to be nothing beyond these practices of judgment and inference that could fix the meaning of terms like “cat” or “dog” – though these are clearly not sufficient to give uniquely determinate meaning.

Some philosophers have argued that uniquely intended interpretations can be imposed by the contents of our beliefs or ideas. For example, maybe my idea of a cat and actual cats share a mysterious essence of catness which “exists both in the thing and (minus the matter) in our minds” and which, in turn, fixes the reference of property terms like “cat” (Putnam 1983, 206; 1981, 59-61). Putnam argues that this response makes recourse to a magic language of self-interpreting mental signs: it states, in effect, that there are mental representations that just mean what they mean irrespective of how the world is or of their role in inference. Here Putnam is in agreement with the French deconstructionist Jacques Derrida. For Derrida, as for Putnam, a sign is a mark that acquires its meaning by being used differently from other signs, whether the mark is spoken, written or occurs in the brain or in some purely mental medium (if such a thing exists). A particular inscription or brain state or sound only counts as a sign insofar as it functions or is used differently from other signs. The obvious candidates for “use” and “function” here are the roles of signs in inferences and in interpretative practices. But these, as has been seen, are unable to fix a unique model for T1.

So it does not matter whether we are talking about mental signs or signs in language: they derive meaning from their differential functioning. For Derrida this has the complicating consequence that any mark must be “iterable”: i.e. it can be lifted from its standard contexts and grafted into new ones, thereby acquiring different functions (Derrida 1988, 9-10). However, for our purposes, the important consequence is that appealing to “inner” or mental signs to fix the intended meanings of T1 seems to present us with exactly the same problem of indeterminacy as we had with T1 itself (Putnam 1978, 127; 1983, 207).

If this is right, then the realist claim that an ideally confirmed theory could be false just comes down to the claim that there are self-standing minds or self-standing languages whose meanings are fixed regardless of how things lie in the world. But if Putnam is right, there are no self-standing meanings in this sense. Descartes’ thought experiment, in either its seventeenth-century demonic version or its modern neuro/simulationist versions, is incoherent.

But, Putnam argues, this means that the idea that truth is non-epistemic is incoherent. To suppose that our beliefs could all be false, no matter how well they conform to experience and canons of enquiry, makes no sense (Putnam 1978, 128-130). And (assuming the soundness of Putnam’s model-theoretic argument) this also means that the idea of a privileged, God’s eye view of the world – MR – is incoherent. There is no single theory that uniquely corresponds to the nature of a mind-independent world because there are always other interpretation functions with which to generate new theories with the same degree of epistemic virtue. Thus the assumption that the world has an intrinsic nature independently of how it is construed from the standpoint of a particular theory or form of life is as much an ungrounded superstition as the notion of substantial forms.

Rather than aspiring to the idealized God’s eye view of metaphysical realism, Putnam argues that we should recognize that truth, reference and objectivity are properties that our claims and experiences have in virtue of “our” practices of inference, confirmation and observation. To say that the sentence “’Cow’ refers to cows” is true is not to make a claim about some determinate relationship – reference – between word and world but to say something about the situations in which a competent speaker of English should use the term ‘cow’ (Putnam 1978, 128, 136). From within the shared practices of English speakers, this fact just shows up as an a priori truth. But this (as Kant also claimed) does not reflect some impossible insight into the mind-independent nature of things; it simply reflects our acculturated understanding of what it is appropriate to say, and when (137). Even the metaphysical structure of the world is – according to this view – a perspective that reflects the background understanding and interests of creatures who share the relevant concerns and practices. Reference is, as Putnam puts it elsewhere, a “matter of interpretation” which presupposes “a sophisticated understanding of the way words are used by the community whose words one is interpreting” (Putnam 1995, 119). So, by the same token, there can be no ready-made totality of objects of reference since (again) this presupposes the discredited God’s eye view:

[From] my “internal realist” perspective at least, there is no such totality as All the Objects There are, inside or outside science. “Object” itself has many uses, and as we creatively invent new uses of words, we find that we can speak of “objects” that were not “values of any variable” in any language we previously spoke. (The invention of “set theory” by Cantor is a good example of this.) (Putnam 1995, 120)


Derrida, Jacques (1988). Limited Inc. Samuel Weber and Jeffrey Mehlman (trans.), Evanston, Ill.: Northwestern University Press.

Putnam, Hilary (1978). Meaning and the Moral Sciences. Routledge & K. Paul.

Putnam, Hilary (1981). Reason, Truth, and History. Cambridge University Press.

Putnam, Hilary (1983). Realism and Reason: Philosophical Papers Volume 3. Cambridge University Press.


[i] We can summarise this state of affairs as follows:


“Fido”  —> the object Fido

“Shlomo” —> the object Shlomo

“…is a cat…” —> property of cattiness

“…is a dog…” —> property of dogginess

“…is the father of…” —> relation of fatherhood


[ii] For example, we can imagine a deviant interpretation function that maps terms in the “wrong” way:


“Fido” —> the object Fido’s shadow

“Shlomo” —> the object Shlomo’s shadow

“…is a cat…” —> property of being the shadow of a cat

“…is a dog…” —> property of being the shadow of a dog

“…is the father of…” —> relation of fatherhood


[iii] The branch of mathematical logic that examines the formal relationships between languages and the models assigned to them under interpretation functions.

[iv] Suppose T1 has an interpretation function I that includes the first set of assignments given above (“Fido” refers to Fido, “Shlomo” refers to Shlomo, etc.) whereas T2’s interpretation function includes the second. Thus the sentence “Shlomo is a cat” says in T1 that the object Shlomo is a cat, whereas the same sentence says in T2 that a particular shadow is the shadow of a cat – which also happens to be true.

Derrida and Syntax

On March 1, 2012, in Uncategorized, by enemyin1



There’s a fascinating post over at M-Phi asking whether Gödel’s use of numbers to code formal relations of derivability in his proof of the incompleteness of arithmetic can be generalized to logical systems which don’t “contain” arithmetic. Not coincidentally, it includes a link to an interesting paper by Paul Livingston on Derrida, Priest and Gödel, which looks at the role of syntax in marking the undecidable elements of texts in deconstruction. New APPS will be hosting a symposium on the paper next week.

Derrida’s reading of Stéphane Mallarmé’s poem Mimique is central to Livingstone’s discussion, but as an aid for those who are not familiar with either, I’ve posted a brief commentary on it from my dusty PhD thesis (It was entitled: The Metaphysics of the Deconstructive Text, if you have to know!).


Rodolphe Gasché compares Derrida’s philosophical project with Husserl’s program for a logical grammar. Logical grammar, in its Husserlian sense, is only derivatively concerned with the structure of language. Syntactic distinctions between linguistic elements are of interest to logical grammar to the extent that they are indicative of the a priori laws governing the composition of intentional contents in cognitive or expressive acts. For example, in Logical Investigation IV Husserl distinguishes between complete, or ‘categorematic’, expressions which express a complete propositional content or a singular presentation, and non-independent, or ‘syncategorematic’, expressions whose senses contribute systematically to independent meanings but which do not express thoughts or refer to objects. Examples of syncategoremata are: ‘but’, ‘between’, ‘The sister of…’, ‘…implies…’. Among the a priori laws that Husserl has in mind would be one stating that a syncategoreme cannot be concatenated with a definite article.[1]

The parallel between Husserl and Derrida, according to Gasché, consists in a common concern with formal or, in Derrida’s case, quasi-formal structures which account for the articulation of elements into discursive wholes.  For Derrida, as for Gasché, Husserl’s project is limited by being oriented by semantics: in particular, the values of truth or reference.   Thus sentences that are necessarily false, such as ‘The circle is square’, are meaningful, but, according to Derrida, are presumed meaningful because their grammatical form ‘tolerates the possibility of relation with [an] object’.[2]  Derrida’s project, according to Gasché, extends formality beyond the domain of semantics or logic, to structures which resist either phenomenological or semantic interpretation.[3]  He illustrates the quasi-syntactical character of différance, trace and the other infrastructures with reference to Derrida’s reading of part of Mallarmé’s prose poem, Mimique,  in ‘The Double Session’:

La scène n’illustre que l’idée, pas une action effective, dans un hymen (d’où procède le Rêve), vicieux mais sacré, entre le désir et l’accomplissement, la perpétration et son souvenir: ici devançant, là remémorant, au futur, au passé, sous une apparence fausse de présent...[4]

Though hymen contributes to the imagistic content of the poem, Derrida suggests that its structural role is as a syntactic place holder which resists onto-grammatical categorization. Although formally a noun – and thus a categoreme in Husserlian terms – Derrida argues that the role of hymen in the poem is largely independent of its meaning but is, rather, determined by its relation to entre, ‘between’: ‘Through the “hymen” one can remark only what the place of the word entre already marks and would mark even if the word “hymen” were not there. If we replaced “hymen” by “marriage” or “crime”, “identity” or “difference”, etc. the effect would be the same, the only loss being a certain economic condensation or accumulation’.[5] The putatively independent hymen is thus textually dependent upon the nominally syncategorematic entre, an element whose ‘signification’ is itself dependent upon its placement. In addition to its grammatical equivocation, hymen is also a ‘between’ of temporal phases of action and cognition (entre le désir et l’accomplissement, la perpétration et son souvenir: ici devançant, là remémorant, au futur, au passé) without being temporally situated (sous une apparence fausse de présent). The indeterminacy of this locus (which, for Derrida, cannot without violence be interpreted as ‘eternal’) can nonetheless be articulated with respect to more or less stable lexical values (devançant, re-mémorant, futur, passé, présent, etc.).

Mimique thus demonstrates, in microcosm, the process by which language extracts a surplus of meaning without being informed by a prior relation to some domain of objects. This is the sense in which, for Gasché, Derrida’s investigations can be considered as a generalization of Husserl’s project:

The system of these infrastructures as one of syntactically re-marked syncategoremata is a system that escapes all phenomenologization as such; it constantly disappears and withdraws from all possible presentation.  In privileging the syntactical in the sense in which I have been developing it – suspended from semantic subject matters of whatever sort – the general system spells out the prelogical conditions of logic, thus reinscribing logic, together with its implications of presence and evident meaning, into a series of linguistic functions of which the logical is only one among others. [6]



D        Dissemination, Barbara Johnson (trans.) (1972; London: Athlone Press, 1981).

SP       Speech and Phenomena, David Allison (trans.) (Evanston, Ill.: Northwestern University Press, 1973).

TM       Rodolphe Gasché, The Tain of the Mirror: Derrida and the Philosophy of Reflection (London: Harvard University Press, 1986).




[1] Edmund Husserl, Logical Investigations,  IV,  pp. 501-503.

[2] SP, p. 99.

[3] TM, pp. 248-249.

[4] Cited in D, p. xx.

[5] Ibid., p. 221.

[6] TM, p. 250.


What Derrida is Realist About

On November 15, 2011, in Uncategorized, by enemyin1

There’s an instructive debate going on between Graham Harman at Object Oriented Philosophy (henceforth OOO) and Levi Bryant over at Larval Subjects (henceforth LS) about whether Derrida’s work is serviceable for realism. OOO is emphatic: not only is Derrida not a ‘plug and play’ realist, his work has no realist application at all. Unlike Heidegger – whose account of withdrawal can be given a realist spin in Object-Oriented circles – Derrida’s position is not amenable to realist use or even to creative abuse. Here’s OOO:

I think it’s simply madness to call Derrida a realist. His entire argument makes sense only by identifying realism with onto-theology and hence with parousia/presence. He reads the concept of substance as the foot soldier of onto-theology. His critique of the proper is a very frank critique of realism. His theory of the trace is another anti-realist maneuver, not a realist one since that would open the door, in his view, to the “transcendental signified.”

There’s obvious textual support for OOO’s position. Derrida does claim in Of Grammatology that infrastructures like trace and différance provide a condition of possibility for presence and ‘onto-theological’ thinking without being presences or grounding entities themselves. Indeed, for Derrida, they provide the invisible underside or ‘tain’ of all thought, reflection or representation.

The term Différance, like its cognate infrastructural markers ‘trace’, ‘supplement’ and ‘iterability’, is an economical allusion to structures of negation, co-involvement and co-implication within general textuality. Textuality, for Derrida, should not be identified with language. A text, according to Derrida, is any structure that can be characterized by such operations and relationships. For example, any text will have to consist of elements that are minimally repeatable: ‘A sign which would take place but “once” would not be a sign: a purely idiomatic sign would not be a sign’ (SP, 50). Language is the paradigm of this, but Derrida argues that even the neural memory trace within Freud’s prototype theory of neural networks has to be reactivatable to do its job – though each reactivation alters the relative amenability to stimulation that differentiates it from other memory traces (WD). Derrida’s analysis of the neural trace in ‘Freud and the Scene of Writing’, meanwhile, refers back to his earlier analysis of Husserl’s account of temporal awareness. Again, this requires any ‘now’ to be implicated with a retained past while potentiating a not yet determinate, novel future. Thus, as Derrida claims in ‘Signature Event Context’, structures like spacing, trace and iterability are invariants. They extend to all representation, to all experience (LI 10).

Derrida’s claims about general textuality may seem like an excessively subtle way of saying that meaning and content cannot be instantiated in formless pap. However, the infrastructural account has the virtue of extreme generality. It amounts to something very much like a textual ontology – even if JD never conceded this.

Enter LS who makes the central point that iterability (one of the textual infrastructures) requires that entities cannot be dissolved into their relations. Since he is an object-oriented philosopher he frames this as a claim about objects: ‘For Derrida, it seems, any object can be severed from its relations to other objects.’ This is important because Derrida is usually cast as an arch-holist. But it is obvious to anyone who reads him carefully that this is not the case. LS is alluding, of course, to passages such as following one from ‘Signature Event Context’:

Every sign, linguistic or nonlinguistic, spoken or written (in the usual sense of this opposition), in a small or large unit, can be cited, put between quotation marks: in so doing it can break with every given context, engendering an infinity of new contexts in a manner that is absolutely illimitable . . . This citationality, this duplication or duplicity, this iterability of the mark is neither an accident nor an anomaly, it is that (normal/abnormal) without which a mark could not even have a function called ‘normal’ (LI, p. 12).

So while Derrida may not be a realist, it is clear that he cannot be a holist. No text is exhausted by its passing affiliations. This also means that Derrida cannot be a relativist since relativism requires relativization to some constraining super-context. Iterability says, in effect, that there is no super-context: all contexts are fragile and open. ‘Mass’ may play a different role in Newton to the role it plays in Einstein (for whom there is both relativistic and proper mass) but this does not mean that the two terms are just their respective theoretical roles. Can this point be generalized to get us something like realism? Well, we need to ask: ‘Realism with respect to what?’ Both LS and OOO use the idiom of things or objects. So if LS is right and iterability requires that things be reusable from context to context and Derrida is committed to iterability, then Derrida is committed to things. Ergo, he’s a realist about objects. But OOO is probably right to insist that Derrida’s no thing fan.

However, it may be that Derrida has ontological commitments to things other than things. An iteration like my quotation/mention of ‘if’ in this sentence is an event. For texts (in the general sense) to work there need to be events that are both differentiated and repeatable. What makes this further ‘if’ a token of the same type as the earlier ‘if’ is not its instantiation of a common signifying essence but its iterability. So Derrida is committed to events and to relations of repetition between event instances. This means that he’s committed to repeatable events, of course. But there are different models of repetition. Here’s Nelson Goodman:

Repetition as well as identification is relative to organization. A world may be unmanageably heterogeneous or unbearably monotonous according to how events are sorted into kinds (WWW, 9).

THIS is relativism: repetition is relative to organizing scheme. But it’s clear that Derridean repetition cannot be scheme-relative in this sense because that would limit iteration to super-contexts and iteration is ‘absolutely illimitable’. So, as I argued long ago in RQRR, we have to say that Derridean repetition is real repetition. Since repetition occurs to events, these must be structurally repeatable. Derridean events are repeatable particulars, however, not abstract events of the kind posited by Roderick Chisholm. So Derrida is a) not a relativist and b) ontologically committed to repeatable particular events and their repetitions. So Derrida is a realist with regard to events and their repetition. However these occurrences are realized, they occur independently of organizing schemes or concepts. They are mind-independent, then, insofar as their occurrence does not depend on the constitutive activity of subjects and language users.






LI         Limited Inc., Samuel Weber and Jeffrey Mehlman (trans.),

(1977; Evanston Ill.: Northwestern University Press, 1988).

OG       Of Grammatology, Gayatri Chakravorty Spivak (trans.),

(London: Johns Hopkins University Press, 1976).

SP        Speech and Phenomena, David Allison (trans.),

(Evanston Ill.: Northwestern University Press, 1973).

WD      Writing and Difference, Alan Bass (trans.),

(1967; London: Routledge and Kegan Paul, 1978).


WWW       Nelson Goodman, Ways of Worldmaking (Indianapolis: Hackett, 1978).

RQRR      David Roden, ‘Radical Quotation and Real Repetition’, Ratio (new series) 17(2), June 2004, 191-206.



Martin Hägglund on Derrida, Trace and Life

On November 7, 2011, in Uncategorized, by enemyin1


In “The Trace of Time and the Death of Life: Bergson, Heidegger, Derrida” Martin Hägglund gives a brilliantly clear exposition of Derrida’s trace as a relationship that undermines both the continuity and punctate discreteness of time and poses an “arche-materiality” of time against a vitalistic/continuist conception of temporality.

The trace-structure is the minimal form of any temporality – an inextricable relation to a past that has never been present. Derrida might, on first reading, appear to endorse something like a vitalist or continuist conception of time. He accepts that temporality requires the displacement of each temporal event from itself: a series of absolutely independent nows would not be a temporal series, any more than an unrepeatable sign could signify anything.

However, the trace is not merely the time of consciousness or life: of memory and habit, say. According to Derrida, this displacement is always “inscribed” in some material-spatial medium. For example, Freud’s purely neurological trace consists of differences in the conduciveness of neural pathways to stimulation – a primary basis for memory which is always repeated differently (iterated) as a result of the causal action of subsequent stimuli on neural tissue.

The synthesis of time cannot be appropriated without spatial support by an immaterial life or subjectivity, or Dasein, etc. Hägglund concludes that this implies an asymmetric dependence of life on matter. The living depends on the non-living but is a contingent product of a physical nature characterized by an arche-material temporality. Life, consciousness etc. depend on the material existence of the trace but not vice versa. The trace is (somehow) built into physical reality and is equally implicit in inorganic or mechanical existence. The zombie-like repetition of the trace is as implicated in the most vivid conscious experience as it is in the evolution of inorganic material structures.


On May 27, 2011, in Uncategorized, by enemyin1

Transcript of a paper given at Nottingham University’s Psychoanalysis and the Posthuman Conference, Sept 7, 2010.

Mankind’s a dead issue now, cousin. There are no more souls. Only states of mind.[1]


Since emerging in nineties critical theory, transhumanism and cyberpunk literature, the term ‘posthuman’ has been used to mark a historical juncture at which the status of the human is radically in doubt. Two main usages or, if you will, two distinct posthumanisms can be discerned over this period.


Transhumanists, futurists and science fiction authors regularly concatenate or hyphenate ‘post’ and ‘human’ when speculating about the long-run influence of advanced technologies on the future shape of life and mind.


By contrast, for cultural theorists and philosophers in the ‘continental’ tradition the posthuman is a condition in which the foundational status of humanism has been undermined. The causes or symptoms of this supposed crisis of humanism are as various as the bio-engineered ‘clades’ ramifying through the post-anthropoform solar system of Bruce Sterling’s 1985 novel Schismatrix. Posthumanism, in this diagnostic or critical sense, is expressed in the postmodern incredulity towards enlightenment narratives of emancipation and material progress; the deconstruction of transcendental or liberal subjectivities; the end of patriarchy; the emergence of contrary humanisms in post-Colonial cultures; the reduction of living entities to resources for a burgeoning technoscience; or, if some theorists are to be believed, all of the above.[2]


In this paper, I will argue that these two usages do not only reflect divergent understandings of the posthuman – the speculative and the critical – but also betray a foreclosure of radical technogenetic change on the part of critical posthumanists. This gesture can be discerned in four arguments which occur in various forms within the extant literature of critical posthumanism:


  • the anti-humanist argument
  • the technogenesis argument
  • the materiality argument
  • and the anti-essentialist argument


All four, as I hope to show, are unsound.


Analysing why these arguments fail has a dual benefit: it prevents us from being distracted by the anti-humanist hyperbole accruing to theoretical frameworks employed within critical posthumanism – such as deconstruction and cognitive science – and, more importantly, it contributes to the development of a rigorous, philosophically self-aware speculative posthumanism.


*    *    *

Contemporary transhumanists argue that human nature is an unsatisfactory ‘work in progress’ that should be modified through technological means where the instrumental benefits for individuals outweigh the technological risks. This ethic of improvement is premised on prospective developments in four areas: Nanotechnology, Biotechnology, Information Technology and Cognitive Science – the so-called ‘NBIC’ suite. For example, improved bionic neural interfaces may allow the incorporation of a wide range of technical devices within an enhanced ‘cyborg’ body or ‘exo-self’, while genetic treatments may increase the efficiency of learning or memory (Bostrom and Sandberg 2006) or be used to increase the size of the cerebral cortex. The wired and gene-modified denizens of the transhuman future could be sensitive to a wider range of stimuli, faster, more durable, more intellectually capable and morphologically varied than their unmodified forebears.


Just how unrestricted and capable transhuman minds and bodies can become is contested, since the scope for enhancement depends both on often-hypothetical technologies and on hotly contested metaphysical claims. Among the prospective technologies which excite radical transhumanists like Ray Kurzweil are the use of ‘micro-electric neuroprostheses’ which might non-invasively stimulate or probe the brain’s native neural networks, allowing it to jack directly into immersive cognitive technologies or map its ‘state vector’ prior to uploading an entire personality (Kurzweil 2005, 317);[3] the elusive goal of ‘artificial general intelligence’ – the creation of robots or software systems which approximate or exceed the flexibility of human belief-fixation and comportment; or, perhaps less speculatively, improvements in processor technology sufficient to emulate the computational capacity of human and other mammalian brains (Ibid. 124-125).


Among the metaphysical issues that trouble all but the most facile of transhumanist itineraries is the scope of functionalist accounts of mental states and processes. Functionalist philosophers of mind claim that mental state types such as beliefs or pains are constituted by the ‘causal role’ of token states within a ‘containing system’ rather than by the stuff from which the system is constituted. The causal role of a token state is defined by the set of states that can bring it about (its inputs) and the set of states that it causes in turn (its outputs). The substrate on which that state is realized is irrelevant to its functional role.[4] Some philosophers of mind – David Chalmers, say – are functionalists with regard to representational states like beliefs or desires, but not with regard to phenomenal states, like having a toothache or seeing pink. If Chalmers is right, then we can never produce artificial consciousness purely in virtue of emulating the kinematics of brain states. However, if we accept the accounts of philosophers like Daniel Dennett and Michael Tye, who offer (however divergent) functional analyses of the property of state consciousness, the prospects for artificial consciousness seem somewhat brighter (Dennett 1991). Given a sufficiently global functionalism, a simulation of an embodied nervous system in which these constitutive relationships were actually instantiated would also be a replication, lacking none of the preconditions for intentionality or conscious experience, regardless of whether it was implemented in biological material as this is currently understood. For radical transhumanists influenced by functionalist and computationalist approaches in the philosophy of cognitive science, then, neural replication opens up the possibility of copying the patterns that constitute a given mind onto non-biological platforms that will be inconceivably faster, more flexible and more robust than evolved biological bodies (Kurzweil 2005).
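Substrate-independence is easiest to see in a toy model. The sketch below is my own illustration (all names and the stimulus–response mapping are hypothetical, drawn from none of the works cited): it defines a functional role purely by an input–output profile and shows two different ‘substrates’ realizing the very same role.

```python
def role_profile(system, inputs):
    """Read off a system's causal role as the outputs it yields for each input."""
    return [system(stimulus) for stimulus in inputs]

# Realizer 1: a 'wetware' look-up table (a hypothetical pain/withdrawal mapping).
def wetware(stimulus):
    table = {"tissue damage": "withdrawal", "food": "approach"}
    return table[stimulus]

# Realizer 2: a 'silicon' rule that implements the same input-output profile.
def hardware(stimulus):
    return "withdrawal" if stimulus == "tissue damage" else "approach"

inputs = ["tissue damage", "food"]

# For the functionalist, sameness of role profile is all that matters;
# the difference in internal constitution is irrelevant.
print(role_profile(wetware, inputs) == role_profile(hardware, inputs))  # True
```

On a sufficiently global functionalism, what goes for this toy role goes for mental state types generally: any system with the right input–output organization counts as realizing the state, whatever it is made of.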


These radical augmentation scenarios indicate to some that a future convergence of NBIC technologies could lead to a new ‘posthuman’ form of existence. Following an influential paper by the computer scientist Vernor Vinge, this ontological step change is sometimes referred to as ‘the technological singularity’ (Vinge 1993): an epochal ‘discontinuity’ resulting from positive feedback exerted by technical change upon itself (Bostrom 2005, 8). Characteristically the scenario is painted in terms of the creation of artificial super-intelligence – intelligence being the variable considered most liable to affect the rate of technical growth. Vinge claims that were a single super-intelligent machine created, it could create still more intelligent machines, resulting in a growth in mentation to plateaux far exceeding our current capacities. Lacking this intellectual prowess, we cannot envisage some of the ways post-singularity intelligences might re-order the world. A post-singularity world would be constituted in ways that cannot be humanly conceived. If it could be humanly conceived it would not be the genuine article. The idea of the singularity, then, is that of a principled limit on human cognition, and predictive power in particular. It is homologous, in many respects, to Immanuel Kant’s idea of the thing-in-itself, which, lacking any mode of presentation in the phenomenal world of space and time, must necessarily elude systematic empirical knowledge.


Commitment to the possibility of a singularity nicely exemplifies the philosophical position of speculative posthumanists. Posthumans in this sense are hypothetical ‘descendants’ of current humans that are no longer human in consequence of some augmentation history.


For speculative (or pre-critical) posthumanism, a technically mediated transcendence of the human constitutes a significant ontological possibility.


Speculative posthumanism is logically independent of the normative thesis of transhumanism: one can be consistently transhumanist while denying the ontological possibility of posthuman transcendence. Similarly, speculative posthumanism is consistent with the rejection of transhumanism. One could hold that a posthuman divergence is a significant ontological possibility but not a desirable one.[1]


Critical posthumanists such as Katherine Hayles, Andy Clark, Don Ihde and Neil Badmington do not contest the potential of NBIC technologies or advance principled arguments against enhancement (Clark is a warm-blooded, moderate transhumanist according to my taxonomy) but argue that speculative or pre-critical posthumanism reflects a philosophically naïve conception of the human, relative to which the posthuman would constitute a radical break. This position is clearly implied in the title of Katherine Hayles’ seminal work of cultural history How We Became Posthuman. For Hayles, the posthuman is not a hypothetical state which could follow some prospective singularity event, say, but a work in progress: a complex and contested re-conception of the human subject in terms drawn from the modern ‘sciences of the artificial’: information theory, cybernetics, Artificial Intelligence and Artificial Life (Hayles 1999, 286).


One example of the intellectual tendencies that inform this new cultural moment is so-called ‘Nouvelle AI’ (NAI). Where the manipulation of syntactically structured representations is the paradigm of intelligence in traditional AI, NAI draws inspiration from the computational prowess exhibited in biological phenomena involving no symbolization, such as swarm intelligence, insect locomotion or cortical feature maps. The guiding insight of NAI is that the preconditions of intelligence – such as error-reduction strategies, pattern recognition or categorization – can emerge in biological systems from local interactions between dumb specialized agents (like ants or termites) without a central planner to choreograph their activities.
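The guiding insight – global order emerging from local interactions among dumb agents, with no choreographer – can be given a minimal sketch. The following toy is my own devising, not an NAI system: agents on a ring repeatedly adopt the majority view of themselves and their two neighbours; each update is purely local, yet global disorder (the number of disagreeing neighbours) never increases and the ring settles into stable blocks.

```python
import random

def step(states, i):
    """One 'dumb' agent adopts the majority view of {left neighbour, itself, right neighbour}."""
    n = len(states)
    votes = states[(i - 1) % n] + states[i] + states[(i + 1) % n]
    states[i] = 1 if votes >= 2 else 0

def disagreements(states):
    """Count neighbouring pairs that differ: a crude measure of global disorder."""
    n = len(states)
    return sum(states[i] != states[(i + 1) % n] for i in range(n))

def settle(states, rng, updates=10_000):
    """Many local updates in random order; no agent ever sees the whole ring."""
    for _ in range(updates):
        step(states, rng.randrange(len(states)))

rng = random.Random(0)
ring = [rng.randint(0, 1) for _ in range(50)]
before = disagreements(ring)
settle(ring, rng)
after = disagreements(ring)
print(before, after)  # disorder never increases under these local dynamics
```

An agent flips only when both its neighbours disagree with it, and every such flip removes two disagreements, so the dynamic monotonically reduces disorder: a very modest analogue of error-reduction emerging without central planning.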


If human mentation ‘emerges’ likewise from millions of asynchronous, parallel interactions between dumb components, Hayles argues, there is no classically self-present ‘human’ subjectivity for the posthuman to transcend. Mental powers of deliberation, inference, consciousness, etc. are already distributed between biological neural networks, actively-sensing bodies and artefacts (Hayles 1999, 239).


I have christened this ‘the anti-humanist objection to posthumanism’ given its striking similarities to the deconstruction of subjectivist philosophy and phenomenology undertaken in post-war French anti-humanisms – Derrida’s in particular (Ibid. 146). Hayles’ proximate target, here, is the putatively autonomous subject of modern liberal theory. The ‘autonomous liberal subject’, she argues, is unproblematically present to itself and distinct from the conceptually-ordered world in which it works out its plans for the good (Ibid. 286). The posthuman subject, by contrast, is problematically individuated, because its agency is constituted by an increasingly ‘smart’ extra-bodily environment on which its cognitive functioning depends and because of the open, ungrounded materiality – or ‘iterability’ – of language, which is both arrested by the context of embodied action and infected by its opacity (Derrida 1988, 152; Hayles 1999, 264-5). The decentered or distributed posthuman subject is no longer sufficiently distinct from the world to order it autonomously as the subject of liberal theory is required to do.


But is this right?


Let’s suppose, along with Hayles and other proponents of embodied and distributed cognition, that the skin-bag is an ontologically permeable boundary between self and non-self (or exo-self). Proponents of the extended mind thesis like Andy Clark and David Chalmers argue from a principle of ‘parity’ between processes that go on in the head and any functionally equivalent process in the world beyond.[5] The parity principle implies that mental processes need not occur only in biological nervous systems but in the environments and tools of embodied thinkers. If I have to make marks on paper to keep in mind the steps of a lengthy logical proof, the parity principle states that my mental activity is constituted by these inscriptional events as well as by the knowledge and habits reposing in my acculturated neural networks.


However, given the parity between bodily and extra-bodily processes, this extension cannot make the activity less evaluable in terms of the rationality standards we apply to deliberative acts. Even if the humanist subject emerges from the summed activities of biological and non-biological agents, this metaphysical dependence (or supervenience) need not impair its capacity to subtend the powers of deliberation or reasoning liberal theory requires of it.[6] Derrida’s more systematic deconstruction of the semantically constitutive subject nuances this picture by entailing limits on the scope of practical reason in the face of the ‘outside’ or exception which infects any rule-governed system (Derrida 1988, p.152). The rule or desire is always precipitate, in this way, but there is a difference between being ahead of oneself and being be-headed. The posthuman, in Hayles’ critical sense of the term, is not less human for confronting the fragile, constitutively precipitate character of cognition and desire.


This is not to say, of course, that there is no merit in the model of the hybrid self that Hayles presents as ‘posthuman’ or that it has no implications for pre-critical or speculative posthumanism. On the contrary, a ‘deconstruction’ of the classically constitutive subject of post-Cartesian thought is, I have argued, a useful prophylactic against immaterialist fancies or transcendentally inspired objections to the naturalizing project of cognitive science (Roden 2006). However, the naturalization of subjectivity and mind is at best a conceptual precondition for envisaging certain transcendent posthumanist itineraries involving the emergence of artificial minds from new technological configurations of matter. It does not represent their culmination.


There are two other objections that may potentially survive this analysis. Firstly, it could be objected that critical posthumanism – like the extended mind thesis – shows that the human is “always already” technically constituted. In her contribution to a recent Templeton Research Seminar on transhumanism, Hayles argues that transhumanists are wedded to a technogenetic anthropology for which humans and technologies have existed and co-evolved in symbiotic partnership. Not only would future transhuman enhancement be a technogenetic process; so too, according to this story, were comparable transformations in the deep past. Human technical activity has, for example, equipped some with lactose tolerance or differential calculus without monstering the beneficiaries into posthumans. One of the proponents of the extended mind thesis, Andy Clark, has framed the technogenesis argument against posthumanism particularly clearly in his book Natural Born Cyborgs:


The promised, or perhaps threatened, transition to a world of wired humans and semi-intelligent gadgets is just one more move in an ancient game. . . We are already masters at incorporating nonbiological stuff and structure deep into our physical and cognitive routines. To appreciate this is to cease to believe in any post-human future and to resist the temptation to define ourselves in brutal opposition to the very worlds in which so many of us now live, love and work (Clark 2003, p. 142).


Natural born cyborgs, as suggested, are already dealers in hybrid mental representations which exploit both a linguistically mapped environment and our multifariously talented brains. This is significant because our capacity to ascribe structured propositional attitudes to others arguably presupposes the capacity to use language to represent their contents. Representing the contents of beliefs is necessary for evaluating them, and it is independently plausible to suppose that, as Donald Davidson argues in his essay ‘Thought and Talk’, having the capacity to evaluate beliefs is part of what is required of a believer (Davidson 1984).


Clearly, if we restrict the evidence base to cases where augmentation has not resulted in a species divergence or something very like it, then we will induce that this is not liable to happen in the future. However, some pre-human divergence had to have happened in our evolutionary past and it is at least plausible – given the ‘natural born cyborgs’ thesis – that technologies such as public symbol systems were a factor in the hominization process. Given that a pre-human divergence has occurred in the past, perhaps due to evolutionary pressures brought about by the development of simpler symbolization techniques, why preclude the possibility that convergent NBIC technologies might prompt a similar step change in the future?


I have argued elsewhere that a cognitive augmentation that replaced public language with a non-symbolic vehicle of cognition and communication might – assuming Clark’s account of hybrid representations – lead to the instrumental elimination of propositional attitude psychology through the elimination of its public vehicles of content. Post-folk folk might, arguably, be opaque to the practices of intentional interpretation we bring to bear in ‘our’ – i.e. ‘human’ – social intercourse and thus might well form initially discrete social and reproductive enclaves that might later seed entirely posthuman republics.


Another of Hayles’ objections to standard posthumanist visions of transcendence is their supposed elision of the materiality of human embodiment and cognition: the materiality argument. The fact that computer simulations can help us understand the self-organizing capacities of biological systems does not entail that these can be fully replicated by some system by virtue of implementing a sufficiently fine-grained software representation of their functional structure.


It is true that some posthumanist scenarios presuppose that minds or organisms can be fully replicated on speculative non-biological substrates like the computronium or ‘smart matter’ imagined in Ken MacLeod’s Fall Revolution novels. However, this objection applies to a fairly restricted class of posthuman transcendence itineraries: namely those involving the replication of existing minds and organisms in computational form. Although Hayles provides no arguments against pan-computationalism or global functionalism, it might well be the case that synthetic life-forms or robots, being differently embodied, will be differently-minded as well (who knows?). In this case, the materiality of embodiment argument works in favour of the pre-critical posthumanist account, not against it. On the other hand, she may be wrong and the pan-computationalists right. Mental properties of things may, for all we know, supervene on their computational properties because every other property supervenes on them as well.


I turn, finally, to an objection that is perhaps implicit rather than explicit in the arguments of critical posthumanists to date but is worth considering on its own, if only for its speculative payoff. I refer to this as the anti-essentialist argument.


The anti-essentialist objection to posthumanism starts from a particular interpretation of the disjointness of the human and the posthuman. This is that the only thing that could distinguish the set of posthumans and the set of humans is that all posthumans would lack some essential property of humanness by virtue of their augmentation history. It follows that if there is no human essence – no properties that humans possess in all possible worlds – there can be no posthuman divergence or transcendence.


This is a potentially serious objection to speculative posthumanism because there seem to be plausible grounds for rejecting essentialism in the sciences of complexity or self-organization that underwrite many posthumanist prognostications. Some philosophers of biology hold that the interpretation of biological taxa most consonant with Darwinian evolution is that they are not kinds (i.e. properties) but individuals. Evolution by natural selection is a form of self-organisation involving feedback relationships between the distribution of genetic traits across populations and their phenotypic consequences in particular environments. An individual or proto-individual can undergo a self-organizing process, but an abstract kind or universal cannot. Thus, the argument goes, evolution happens to species qua individuals (or proto-individuals), not species qua kinds. To be biologically ‘human’ on this view is not to exemplify some set of necessary and sufficient properties, but to be genealogically related to earlier members of the population of humans (Hull 1988).


Clearly, if biological categories are not kinds and posthuman transcendence requires the technically mediated loss of properties essential to membership of some biological kind, then the posthuman transcendence envisaged by pre-critical posthumanism is metaphysically impossible.[7]


Underlying the anti-essentialist objection is the assumption that the only significant differences are differences in the essential properties demarcating natural kinds. But why adhere to this philosophy of difference?[8] The view that nature is articulated by differences in the instantiation of abstract universals sits poorly with the idea of an actively self-organizing nature underlying the leading-edge cognitive and life sciences. A view of difference consistent with self-organization would locate the engines of differentiation in those micro-components and structural properties whose cumulative activity generates the emergent regularities of complex systems.


For example, we might adopt an immanent ontology of difference for which individuating boundaries are generated by local states of matter: such as differences in pressure, temperature, miscibility or chemical concentration (DeLanda 2004). For immanent ontologies of difference – that of Gilles Deleuze, say – the conceptual differences articulated in natural language kind-lexicons are asymmetrically dependent upon active individuating differences (Ibid. 10). A Deleuzean ontology is obviously not the only option here: any ontology which reconciles the existence of real or radical differences with the lack of transcendent or transcendental organizing principles would do.


In short: we can be anti-essentialists and anti-Platonists while holding that the world is profoundly differentiated in a way that owes nothing to the transcendental causality of abstract universals, subjectivity or language.




I have argued that critical posthumanists provide few convincing reasons for abandoning pre-critical or speculative posthumanism. The anti-essentialist argument presupposes a model of difference that is ill-adapted to the sciences that critical posthumanists cite in favour of their naturalized deconstruction of the human subject. The deconstruction of the humanist subject implied in the anti-humanist objection may itself be a useful prolegomenon to a posthuman-engendering cognitive science; but it complicates rather than corrodes the philosophical humanism that critical posthumanism problematizes while leaving open the possibility of a radical differentiation of the human and the posthuman. The technogenesis objection is weak, if conceptually productive. The elision of materiality argument is based on problematic assumptions and, even if sound, would preclude only some scenarios for posthuman divergence.


Of these, the anti-essentialist objection seems the strongest and most wide-ranging in its implications. Our response to it suggested that it might be circumventable with an immanent ontology of emergent differences such as Deleuze’s ontology of the virtual. However, a consequence of embracing locally emergent differences in this way is that there can be no adequate concept of posthuman difference without posthumans. For it is surely a consequence of any such account that a science of the different cannot precede its historical emergence or morphogenesis, even if only in simulated form. This implies that the posthuman is at best a placeholder signifying a possibility that we cannot adequately conceptualize ahead of its actualization. However, this does not preclude a theoretical development of the implications of the posthuman insofar as we can conceptualize it.


Moreover, the emptiness of the signifier ‘posthuman’ has an ethical or, perhaps, ‘anti-ethical’[9] consequence that arguably should be considered more fully in the light of Derrida’s remarks about the precipitate character of thought. If the speculative idea of the posthuman is a placeholder for differences that are determinable only via some synthetic process – such as the creation of actual posthumans, modified transhumans, or a range of simulations or aesthetic models (as in cybernetic art) – these differences can be determined only by progressive actualization. Thus posthumanist philosophy is locked into a dialectically unstable preterition, falling between speculative and synthetic activity. To understand what is as yet undetermined, it must attempt – however incrementally – to bring it into being and to give it shape.



Bostrom, Nick (2005), ‘A History of Transhumanist Thought’, Journal of Evolution and Technology, 14 (1).

____(2005b), ‘In Defence of Posthuman Dignity’, Bioethics 19(3), pp. 203-214.

____(2008), ‘Why I Want to Be Posthuman When I Grow Up’, in B. Gordijn and R. Chadwick (eds.), Medical Enhancement and Posthumanity, Springer.

Bostrom, Nick and Sandberg, Anders (2006), ‘Converging Cognitive Enhancements’, Annals of the New York Academy of Sciences 1093, pp. 201–227.

Clark, Andy (2003), Natural Born Cyborgs (Oxford: OUP).

___(2006), ‘Language, Embodiment and the Cognitive Niche’, Trends in Cognitive Sciences 10(8), pp. 370-374.

___(1993), Associative Engines (Cambridge, MA: MIT Press).

___(2006) ‘Material Symbols’, Philosophical Psychology Vol. 19, No. 3, June 2006, pp. 291–307.

Clark, Andy and Chalmers, David (1998), ‘The Extended Mind’, Analysis 58(1), pp. 7-19.

Churchland, Paul (1998), ‘Conceptual Similarity Across Sensory and Neural Diversity: The Fodor/LePore Challenge Answered’, Journal of Philosophy XCV(1), pp. 5-32.

_____(1995), The Engine of Reason, The Seat of the Soul (Cambridge, MA: MIT Press).

_____(1989), ‘Folk Psychology and the Explanation of Human Behaviour’, Philosophical Perspectives 3, pp. 225–241.

____(1981), ‘Eliminative Materialism and the Propositional Attitudes’, Journal of Philosophy 78(2), pp. 67-90.

Cilliers, Paul (1998), Complexity and Postmodernism. London: Routledge.

Davidson, Donald (1984), ‘Thought and Talk’, in Inquiries into Truth and Interpretation (Oxford: Clarendon Press), pp. 155-170.

Deacon, Terrence (1997), The Symbolic Species: The Co-evolution of Language and the Human Brain (London: Penguin).

DeLanda, Manuel (1997), ‘Immanence and Transcendence in the Genesis of Form’, South Atlantic Quarterly 96(3), Summer 1997, pp. 499-514.

_____(2004), Intensive Science & Virtual Philosophy, London: Continuum.

Deleuze, Gilles and Guattari, Felix (1992), A Thousand Plateaus, Brian Massumi (trans.). London: Athlone.

Derrida, Jacques (1986), Margins of Philosophy, Alan Bass (trans.). Brighton: Harvester Press, pp. 209-271.

___(1988), Limited Inc. Samuel Weber (trans.). Northwestern University Press.

___(2002), Acts of Religion, Gil Anidjar (ed.). New York: Routledge.

Dennett, Daniel (1991), Consciousness Explained. London: Penguin.

Fukuyama, Francis (2002), Our Posthuman Future: Consequences of the Biotechnology Revolution (London: Profile Books).

Hayles, N. Katherine (1999), How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics (Chicago: University of Chicago Press).

Hull, David (1988), ‘On Human Nature’, in PSA 1986, vol. 2, A. Fine and P. Machamer (eds.), East Lansing, MI: Philosophy of Science Association, pp. 3-13; reprinted in Hull (1989) and Hull and Ruse (eds.), Philosophy of Biology (1998).

Jones, Richard (2009), ‘Brain Interfacing with Kurzweil’, http://www.softmachines.org/wordpress/?p=450, accessed 08.09.2009.

Kurzweil, Ray (2005), The Singularity is Near (New York: Viking).

LaPorte, Joseph (2004), Natural Kinds and Conceptual Change (Cambridge: CUP).

Lisewski, Andreas Martin (2006), ‘The Concept of Strong and Weak Virtual Reality’, Minds and Machines 16, pp. 201–219.

Lycan, William G. (1999), ‘The Continuity of Levels of Nature’, in William Lycan (ed.), Mind and Cognition (Oxford: Blackwell), pp. 49-63.

MacLennan, B.J. (2002), ‘Transcending Turing Computability’, Minds and Machines 13, pp. 3–22.

Marx, Karl and Engels, Frederick (1994), The German Ideology, C.J. Arthur (Ed.). London: Lawrence and Wishart.

Mackenzie, Adrian (2002), Transductions: bodies and machines at speed (London: Continuum).

Patton, Paul (2007), ‘Utopian Political Philosophy: Deleuze and Rawls’, Deleuze Studies 1, pp. 41-59.

Rawls, John (1999), A Theory of Justice (Harvard University Press).

Simondon, Gilbert (1989), Du mode d’existence des objets techniques (Editions Aubier).

Shagrir, Oron (2006), ‘Why We View the Brain as a Computer’, Synthese 153, pp. 393-416.

Soper, Kate (1986), Humanism and Anti-humanism. London: HarperCollins.

Sterling, Bruce (1996), Schismatrix Plus (New York: Berkley).

Vinge, Vernor (1993), ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’ [online], http://www.rohan.sdsu.edu/faculty/vinge/misc/singularity.html, accessed 24.04.2008.




[1] Sterling (1996), p. 59.

[2] This appears to be the position of Rosi Braidotti in her recent plenary address to the 2009 Society for European Philosophy and Forum for European Philosophy Conference in Cardiff.

[3] For a rather less sanguine commentary on the state of the art in non-invasive scanning see Jones 2009.

[4] By analogy, any system could count as being in the state White Wash Cycle if inputting dirty whites at some earlier time resulted in it outputting clean whites at some later time.


[5] Parity Principle: ‘If, as we confront some task, a part of the world functions as a process which, were it to go on in the head, we would have no hesitation in accepting as part of the cognitive process, then that part of the world is (for that time) part of the cognitive process’ (Clark and Chalmers 1998, p.XX).


[6] The notion of supervenience is frequently used by non-reductive materialists to express the dependence of mental properties on physical properties without entailing their reducibility to the latter. Informally: M properties supervene on P properties if a thing’s P properties determine its M properties. If aesthetic properties supervene on physical properties, then if x is physically identical to y and x is beautiful, y must be beautiful. Supervenience accounts vary with the modal force of the entailments involved. ‘Natural’ or ‘nomological’ supervenience holds in worlds whose physical laws are like our own. ‘Metaphysical supervenience’, on the other hand, is often claimed to hold with logical or conceptual necessity.
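Over a finite toy domain the informal condition can be made exact: M supervenes on P just in case there is no M-difference without a P-difference. The sketch below is my own illustration (the objects and properties are invented, and a real supervenience claim is modal, quantifying over possible worlds rather than a finite list of objects); it checks the condition extensionally:

```python
def supervenes(items, p_props, m_props):
    """Over this finite domain, M supervenes on P iff no two items agree
    in every P-property while differing in some M-property."""
    seen = {}
    for x in items:
        p, m = p_props(x), m_props(x)
        if p in seen and seen[p] != m:
            return False  # an M-difference without a P-difference
        seen[p] = m
    return True

# Hypothetical objects with 'physical' and 'aesthetic' properties.
world_a = [
    {"mass": 1, "shape": "vase", "beautiful": True},
    {"mass": 1, "shape": "vase", "beautiful": True},    # physical twin, same M-property
    {"mass": 2, "shape": "brick", "beautiful": False},
]
world_b = world_a + [{"mass": 1, "shape": "vase", "beautiful": False}]  # twin differing in M

physical = lambda x: (x["mass"], x["shape"])
aesthetic = lambda x: x["beautiful"]

print(supervenes(world_a, physical, aesthetic))  # True
print(supervenes(world_b, physical, aesthetic))  # False: beauty fails to supervene
```

The modal variants in the footnote differ only in how wide the domain of the check is: nomological supervenience quantifies over physically possible worlds, metaphysical supervenience over all of them.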

[7] This objection is overdetermined, further, by the fact that the possibility of successfully implementing radical transhumanist policies seems incompatible with a stable human nature. If there are few cognitive or bodily invariants that could not – in principle – be modified with the help of some hypothetical NBIC technology, then transhumanism arguably presupposes that there are no such essential properties for humanness. Transhumanism might still be consistent with an etiolated historical essentialism which holds that any being descended from a member of some hypothetical ancestor population is human.

[8] David Hull points out that the genealogical boundaries between species can be considerably sharper than boundaries in ‘character space’ (Hull 1988, 4). The fact that nectar-feeding hummingbird hawk moths and nectar-feeding hummingbirds look and behave in similar ways does not invalidate the claim that they have utterly distinct lines of evolutionary descent (Laporte 2004, 44).

[9] In her address to the Cardiff SEP-FEP conference, ‘The Ethics of Extinction’, Claire Colebrook argued that while ethos implies habit, place and environment, situations of catastrophic change (e.g. climate change) imply the need to overcome these rooted modes of action and affect. Hence the prospect of humanity being superseded by non-humans requires an anti-ethics which imagines or simulates the radically non-human.

[1] Although some hold that the singularity is ‘beyond good or evil’, one might hold that certain posthumans would be worse off than even the most miserable human; a possibility that could warrant anti-transhumanist policies such as technological relinquishment or pre-emptive species suicide.