This is the full text of my presentation at the improvisation panel at the Society for European Philosophy / Forum for European Philosophy joint conference in Dundee, 2015.


1. Introduction: Improvisation and the Politics of Technology


Ray Brassier’s “Unfree Improvisation/Compulsive Freedom” (written for the 2013 collaboration with Basque noise artist Mattin at Glasgow’s Tramway) is a terse but insightful discussion of the notion of freedom in improvisation. It begins with a polemic against the voluntarist conception of freedom. The voluntarist understands free action as the uncaused expression of a “sovereign self”. Brassier rejects this supernaturalist understanding of freedom, arguing that we should view freedom not as the determination of an act from outside the causal order, but as the self-determination by action within the causal order.

According to Brassier, self-determination is reflexive and rule-governed. A self-determining system acts in conformity to rules but is capable of representing and modifying these rules with implications for its future behaviour. This is only possible if we make the rules explicit in language (Brassier 2013b: 105; Sellars 1954: 226).

Brassier’s proximate inspiration for this model of freedom is Wilfrid Sellars’ account of language and meaning (1954). Sellars’ analytic pragmatism buys into the Kantian claim that concepts are rules for unifying or linking claims to cognitive significance rather than representations of something outside thought – concepts are “cooks rather than hooks”.[1]

Sellars distinguishes a more or less automatic and unconscious rule following from a metalinguistic level with the logical resources for reflection and self-awareness. Indeed for Brassier’s Sellars, thought and intentional action emerge only with the metalinguistic capacity to make reasons explicit in “candid public speech” (Brassier 2013b: 105; Sellars 1954: 226-8).[2] Or as he puts it: “Autonomy understood as a self-determining act is the destitution of selfhood and the subjectivation of the rule. The ‘oneself’ that subjects itself to the rule is the anonymous agent of the act.”

For Brassier, an avowed naturalist, it is important that this capacity for agency is non-miraculous, and that a mere assemblage of pattern governed mechanisms can be “gripped by concepts” (Brassier 2011). As he continues:

The act …. remains faceless. But it can only be triggered under very specific circumstances. Acknowledgement of the rule generates the condition for deviating from or failing to act in accordance with the rule that constitutes subjectivity. This acknowledgement is triggered by the relevant recognitional mechanism; it requires no appeal to the awareness of a conscious self…. (Brassier 2013a)

Now, there are criticisms that one can make of this account. For example, Brassier struggles to articulate the relationship between linguistic rules or norms and the natural regularities and behaviours on which they depend. For this reason, I’ve argued that the normative functionalism associated with Sellars and, latterly, Robert Brandom bottoms out in Davidson-style claims about how idealized interpreters (privy to all the facts) might construe a given stretch of behaviour (Roden 2015).

Brassier’s position thus depends on the conception of an interpreting subject it is not in a position to satisfactorily explain. Despite pretensions to naturalistic virtue, his world is bifurcated between a natural real and an order of thought that depends on it without really belonging to it (Brassier 2013b: 104).

These issues lurk in the background of Brassier’s short text on improvisation, which claims that the act of improvisation involves an encounter between rule governed reason and pattern governed mechanisms. However, Brassier does not specify how such rules operate in music, or how the encounter between rules and mechanism is mediated.

In what follows I will argue that one reason he does not do this is that such rules do not operate in improvisation or in contemporary compositional practice. Claims about what is permissible or implied in music index context sensitive perceptual responses to musical events. These exhibit tensions between the expectations sedimented in musical culture and actual musical events or acts.

However, I will argue that this perceptual account of musical succession provides an alternate way of expressing Brassier’s remarks on the relationship between music and history in “Unfree Improvisation” – one that eschews normative discourse in favour of a descriptive account of the processes, capacities and potentialities operating in the improvising situation.

This metaphysical adjustment is of interest outside musical aesthetics and ontology, however.

Brassier’s text suggests that the temporality of the improvising act is a model for understanding a wider relationship with time: in particular the remorseless temporality explored in his writings on Prometheanism, Accelerationist Marxism and Radical Enlightenment (See Brassier 2014). I hope to use this suggestion as a clue for refining an ethics that can address the radically open horizons of being I discuss in my book Posthuman Life (Roden 2014).

This paper can, then, be thought of as a staged encounter between Prometheanism and my own Speculative Posthumanism. Brassier’s Prometheanism, like Reza Negarestani’s “inhumanism”, proposes that all reasons and purposes are “artificial”: implicit or explicit moves within language games (see Negarestani 2014b). Thus the Promethean rejects all quasi-theological limits on artificialisation and enjoins the wholesale “reengineering of ourselves and our world on a more rational basis” (Brassier 2014: 487).

Speculative Posthumanism (SP) does not propose any theological limits to artificialisation. Far from it! However, it holds that the space of possible agents is not bound (a priori) by conditions of human agency or society. Since we lack future-proof knowledge of possible agents this “anthropologically unbounded posthumanism” (AUP) allows that the results of techno-political interventions could be weird in ways that we are not in a position to imagine (Roden 2014: Ch.3-4; Roden 2015b).

The ethical predicament of the Speculative Posthumanist is thus more complex than that of the Promethean. Given AUP there need be no structure constitutive of all subjectivity or agency. Thus she cannot appeal to a theory of rational subjectivity to support an ethics of becoming posthuman. So what – for example – might autonomy or freedom involve from the purview of unbounded posthumanism? What counts as emancipatory as opposed to oppressive violence?

I will argue that the idea of freedom embedded in Brassier’s text on improvisation can be elucidated by comparing the obscure genesis of improvisation to the predicament of agents in rapidly changing technical systems. Thus Brassier’s treatment of improvisation retains its resonance on this posthumanist reading even if it militates against his wider ontological and political commitments.

2. Harmonic Structure and Succession

I will begin by making use of some analyses of performance practice in post-war jazz, and of Julian Johnson’s analysis of the disruption of the rhetoric of harmonic accompaniment in the work of Anton Webern, to support this model of affective subjectivity in improvisation.

Novice jazz improvisers must internalize a large body of musical theory: e.g. they learn modal variations on the Ionian and harmonic minor scales or “rules” for chord substitution in cadences based on shared tritones. This learning enables musical performance by sculpting possibilities for action during improvisation. For example, ambiguous voicings involving tritones or fourths decouple chords from the root, allowing modulations into otherwise distant keys to slide easily over a tonal center.

This harmonic know-how consists of recipes for honing expectations and sensations, not the acknowledgement of norms. The statement that a tritone (augmented fourth) belonging to a dominant seventh chord should resolve to a tonic reflects listener expectations in diatonic environments where a tonal center is defined in practice. This is not an intrinsic feature of the tritone, though, since each tritone occurs in two dominant chords. For example, the B–F tritone occurs in both G7 (resolving to C) and Db7 (resolving to Gb).

This provides a recipe for substituting a dominant chord at a tritone remove in perfect cadences.

However, it also allows harmonic series to modulate into unrelated keys.
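
To make the shared-tritone recipe concrete, here is a minimal sketch (in Python, purely illustrative; the pitch-class arithmetic is standard music theory but the function names are my own) showing that G7 and Db7 contain the same B–F tritone, which is what licenses the substitution and lets a cadence slide towards a distant key.

```python
# A purely illustrative sketch of tritone substitution (function names are mine).
# Pitch classes: C=0, Db=1, D=2, ..., B=11.
NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def dominant_seventh(root):
    """Pitch classes of a dominant 7th chord: root, major 3rd, 5th, minor 7th."""
    return [(root + i) % 12 for i in (0, 4, 7, 10)]

def tritone_pair(root):
    """The 3rd and 7th of a dominant chord, which lie a tritone (6 semitones) apart."""
    return {(root + 4) % 12, (root + 10) % 12}

def tritone_substitute(root):
    """Root of the dominant chord a tritone away, which shares the same 3rd/7th pair."""
    return (root + 6) % 12

g = NOTE_NAMES.index("G")            # G7 conventionally resolves to C
sub = tritone_substitute(g)          # Db7, which conventionally resolves to Gb
print([NOTE_NAMES[n] for n in dominant_seventh(g)])     # ['G', 'B', 'D', 'F']
print([NOTE_NAMES[n] for n in dominant_seventh(sub)])   # ['Db', 'F', 'Ab', 'B']
print(tritone_pair(g) == tritone_pair(sub))             # True: both contain B and F
```

Nothing in this arithmetic tells a player that the substitution ought to be made; it only maps where the shared tritone can lead.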

As jazz theorist Martin Rosenberg notes, the use of augmented dominants containing two tritones by Bebop players such as Charlie Parker and Thelonious Monk produces multiple lines of harmonic consequence and thus an ambiguous context that is not conventionally diatonic, even if (in contrast to free jazz) some adherence to a tonal center is preserved.


Symmetrical chords built of fourths (as used by pianists such as McCoy Tyner and Bill Evans) or major thirds have a similar effect, whether in diatonic contexts (where they can render the tonic ambiguous by stripping it to the third, sixth and ninth) or in modal contexts where a tonal center is still implied by a pedal bass.

In consequence, the home key in the modal jazz developed by Miles Davis and Coltrane never prescribes a series of actions but furnishes expectations that can make an improvisation aesthetically intelligible after the fact. As Rosenberg explains, when Coltrane improvises in modal compositions such as “A Love Supreme” he deploys pentatonic or digital patterns modulated well away from the implied tonal center suggested by a bass line or by the “head” (the tune that traditionally opens or closes a jazz improvisation):

During his solos, Coltrane performs constant modulations through a series of harmonic targets or, what avant-garde architects Arakawa and Gins would call tentative “landing sites” (2002: 10) that become deployed sonically over a simple harmonic ‘home’ through the use of centered and then increasingly distant pentatonic scales from that home. In doing so, Coltrane seeks to widen what I call “the bandwidth” of melodic, harmonic and rhythmic relationships possible. He does so as he maintains the coherence of the melodic line (or narrative) through the aurally comfortable shapes (from the perspective of the audience especially) enabled by those very pentatonic scales, despite the juxtaposition of distant and dissonant tonal centers implied by this method. (Rosenberg 2010: 211-12).

This differential/transformative structure is, not surprisingly, characteristic of scored Western art music. In his analysis of Anton Webern’s Three Little Pieces for Cello and Piano, Op. 11, Julian Johnson argues that the opening two bars of the first piece allude to the framing and introduction of melody in traditional song and opera. For example, in baroque recitative the onset of a lyrical melody is frequently indicated by an arpeggiated chord. However, the high-register chord that occurs in the first bar of the piece follows a single muted cello note and is followed, in turn, by a descending piano passage, bathetically marking the absence of the expressive melody portended by the chord (Johnson 1998: 277, 272).


Culturally transmitted musical structures consist of exquisitely context-sensitive patterns of expectation – like the chord/recitative framing relation discussed by Johnson. These exist in tension with the musical act and are transformed in exemplary works.[3] Their linguistic formulations do not prescribe but indirectly describe how musical transitions are perceived and felt. Thus in the context of improvisation and composition, we are not free in virtue of acknowledging or declining musical norms, since these are not in place.

Brassier’s conception of autonomy seems ill adapted to musical contexts, then, even if we buy into his naturalist dismissal of the sovereign self. Thus if we are to tease out the implications of his text for posthuman agency, we need to formulate an alternative account of autonomy in improvisational contexts that is not predicated on the acknowledgement of musical norms.

3. The Time of Improvisation

An improvisation takes place in a time window limited by the memory and attention of the improviser, responding to her own playing, to the other players, or (as Brassier recognises) to the real-time behaviour of machines such as audio processors or midi-filters. It thus consists of irreversible acts that cannot be compositionally refined. They can only be repeated, developed or overwritten by subsequent acts.

Improvisation is thus committed to what Andy Hamilton calls “an aesthetics of imperfection” as opposed to a Platonism for which the musical work is only contingently associated with performances or performers (Hamilton 2000: 172). The aesthetics of imperfection celebrates the genesis of a performance and the embodying of the performer in a specific time and space.[4]

If improvisation is a genesis, it implies an irreversible temporality. Composition or digital editing is always reversible. One develops notational variants of an idea before winnowing them down or rejecting them. One hits Ctrl/Cmd + Z in the DAW (Digital Audio Workstation) when a mix goes bad.

An improvisation, by contrast, is always a unique and irreversible event on the cusp of another. An omniscient being would be incapable of improvising because its options would be fully known in advance. Unlike the improviser, it could never surprise itself. Its act would be fully represented before it took place and thus reversible.

It follows that an improvisation must exceed the improviser’s power of representation. The improvising agent must work with things or processes that it cannot entirely control or fully know. Paraphrasing Amy Ireland’s discussion of H. P. Lovecraft and Michel Serres in her excellent paper “Noise: An Ontology of the Avant-garde”, improvisation requires a “para-site” – an alien interloper capable of disrupting or perverting the prescribed order of events. In Serres’ retelling of La Fontaine’s tale of the country rat and the city rat, this might be the Master who interrupts the rats’ nocturnal feast and sends the country rat scurrying home. Yet from the human position, it is the rodent feast that interrupts the Master’s sleep. The take-home moral of this – for Ireland – is that the context in which a disturbance counts as noise requires a subjectively imposed norm that distorts the radical otherness of an inhuman reality to make it comprehensible for a human subject (Ireland 2014; Roden Forthcoming).[5]

With improvising subjectivity, however, parasitism is the rule – the noise that actualizes an always-tentative decision in real-time performance.[6] This sensitive yet tenuous agency implies a complex, disunified subject in the dark about its own complexity. As the tagline to Scott Bakker’s ultra-dark near-future thriller Neuropath has it: we are not what we think we are (2010, 2014).

Brassier veers towards such a model at the end of his article. It is, in any case, implied by his naturalistic proposal for explaining the evolution of reasons in terms of the organization of pattern governed physical systems. The freedom of improvisation requires, as he puts it, “an involution of [or reciprocal interaction between] mechanisms” to compose the (“not necessarily human”) agent of the act.

The ideal of ‘free improvisation’ is paradoxical: in order for improvisation to be free in the requisite sense, it must be a self-determining act, but this requires the involution of a series of mechanisms. It is this involutive process that is the agent of the act—one that is not necessarily human. It should not be confused for the improviser’s self, which is rather the greatest obstacle to the emergence of the act. The improviser must be prepared to act as an agent—in the sense in which one acts as a covert operative—on behalf of whatever mechanisms are capable of effecting the acceleration or confrontation required for releasing the act (My emphasis)

The claim that there is a potential act needing to be “released” in a given musical setting might appear to impute rule-like structure or normativity to the improvising situation: something that ought to be. However, this claim does not cohere well with the context sensitivity and underdetermination of musical expectation described in the previous section.

So what, then, is the nature of the paradoxically compelling, selfless freedom that falls out of this interaction between pattern recognizers, pattern generators and effectors? If we exorcise the specters of transcendental thought – Brassier’s own normative functionalism included – how, if at all, do we conceptualise what he calls “the subjectivity of the act” or its “self-determination”?

I think clues about this selfless self-determination can be gleaned from improvising situations we know about. The real of the improvising situation might be all protean complexity, but as with other aspects of the world, we have techniques for coping with that complexity. And these work (more or less).

For example, in a field study of post-hardcore rock musicians, Alec McGuiness provides a vivid example of musicians using a procedural learning technique to prime a series of musical riffs over which their intentional control is fairly limited. Songs are built by associating riffs with riffs, but, as one informant explains, are varied in performance when it “feels right” to do so:

[S]ometimes there’ll be moments when we’re not looking at each other but all four will either hit that heavy thing, or really bring it down […] And yeah, those moments […].. it’s priceless, when everyone just hits the same thing at the same time. […] That’s when you know that that song’s definitely going to work. ‘Cause it’s obviously sort of pressing the same buttons on each of us at the same time. (McGuiness 2009: 19)

So, here, releasing the act involves a distributed response to a “felicitous performance”. This is a collective judgement expressed through the performance act itself rather than by the application of formal musical rules (of which the performers are largely innocent in any case).

The phenomenology of this act is also dark. All experience is, I have argued elsewhere, striated with “darkness” (Roden 2013; Roden 2014 Ch. 4). Having an experience affords only a very partial insight into its nature. We are not normally aware of this because, as Bakker writes, consciousness “provides no information about the absence of information.” Experience seems like a gift because we are unmindful of the heavy lifting required to produce it. We are in the dark about the dark.

The “state of grace” felt in felicitous improvisation is, then, an artifact of our technical underdevelopment. A technics like chaining riffs enables a groove but not groove control. It allows us, in Brassier’s words, to do “something with time” even as time “does something with us” (2014: 469).

However, if this is self-determination but not rule-governed rationality, what is it? I think we can understand this better by utilizing a conception of autonomy that is not exclusive to discursive creatures (as is the case with Kantian or Sellarsian self-determination).

In Posthuman Life, I call this “functional autonomy”. This idea helps articulate an unbounded speculative posthumanism because it applies to any self-maintaining system capable of enlisting values for its functionings or of becoming a value for some wider assemblage. A functionally autonomous system might be discursive and social; it might be a superintelligent but asocial singleton that only wants to produce paperclips. It might be something whose existence is utterly inconceivable to us, like a computational megastructure leeching the energy output of an entire star.

A diminution of functional autonomy is a reduction in power. Arthritis of the limbs painfully reduces freedom of movement and thus the ability to cultivate agency in other ways. Acquiring new skills increases “one’s capacities to affect and be affected, or to put it differently, increase one’s capacities to enter into novel assemblages” (DeLanda 2006: 50).

To be sure, success at improvising is not like acquiring a new skill. However, it requires that the agent embraces and is embraced by a reality and a time that interrupt any settled structure of values and ends.

This embrace might seem atavistic, divorced from the Promethean prospectus for engineering nature in compliance with reason. But this assumes that the means for engineering nature are themselves compliant. In Posthuman Life I argue, to the contrary, that the systemic complexity of modern technique precludes binding technologies to norms. Modern self-augmenting technical systems are so complex as to be both out of control and characterised by massive functional indeterminacy – rendering them independent of any rules of use.

As the world is re-made by this vast planetary substance, any agent located in the system needs to maximize its own ability to acquire new ends and purposes, or bet (against the odds) on stable environments or ontological quiescence. Any technology liable to increase our ability to accrue new values and couplings in anomalous environments is, then, of local ecological value (Roden 2014: Ch. 7).[7] This is not because such technologies make us better or happier, but because the only viable response to this deracinative modernity is more of the same.

In this “posthuman predicament” agency must be febrile, even masochistic. The agent must tolerate and practice a systemic violence against itself and its world. Thus improvisation – because it necessarily embraces and is embraced by the involuted mechanisms of performance – rehearses our tryst with the ontological violence of the hypermodern.



Bakker, Scott. 2010. Neuropath. New York: Tor.

Bakker, Scott. 2014. “The Blind Mechanic II: Reza Negarestani and the Labour of Ghosts”. Three Pound Brain. Retrieved 30 April 2014, from https://rsbakker.wordpress.com/2014/04/13/the-blind-mechanic-ii-reza-negarestani-and-the-labour-of-ghosts

Beaty, R. E. (2015). The neuroscience of musical improvisation. Neuroscience & Biobehavioral Reviews, 51, 108-117

Brandom, R. 1994. Making It Explicit: Reasoning, Representing, and Discursive Commitment. Cambridge, MA: Harvard University Press.

Brandom, R. 2001. Articulating Reasons: An Introduction to Inferentialism. Cambridge Mass.: Harvard University Press.

Brassier, Ray & Rychter, Marcin. 2011. “I Am a Nihilist Because I Still Believe in Truth”. Kronos (March). http://www.kronos.org.pl/index.php?23151,896 (Accessed 9 May 2015).

Brassier, R. 2011b. “The View from Nowhere”. Identities: Journal for Politics, Gender and Culture 17: 7–23.

Brassier, Ray 2013a. “Unfree Improvisation/Compulsive Freedom”, http://www.mattin.org/essays/unfree_improvisation-compulsive_freedom.htm (Accessed March 2015)

Brassier, Ray. 2013b. “Nominalism, Naturalism, and Materialism: Sellars’ Critical Ontology”. In Bana Bashour & Hans D. Muller (eds.), Contemporary Philosophical Naturalism and its Implications. Routledge. 101-114.

Brassier, Ray. 2014. “Prometheanism and its Critics”. In R. Mackay and A. Avanessian (eds.), #Accelerate: The Accelerationist Reader (Falmouth: Urbanomic), 467–488.

Budd, M. (2001). The Pure Judgement of Taste as an Aesthetic Reflective Judgement. The British Journal of Aesthetics, 41(3), 247-260.

Hickey-Moody, A. 2009. “Little War Machines: Posthuman Pedagogy and Its Media”. Journal of Literary & Cultural Disability Studies 3(3): 273–80.

Huron, D. B. 2006. Sweet Anticipation: Music and the Psychology of Expectation. Cambridge, MA: MIT Press.

Ireland, Amy. 2014. “Noise: An Ontology of the Avant-garde” https://www.academia.edu/3690573/Noise_An_Ontology_of_the_Avant-Garde (retrieved 30th April 2015)

Johnson, Julian, 1998. “The Nature of Abstraction: Analysis and the Webern Myth”, Music Analysis, Vol. 17, No. 3, pp. 267-280.

Limb, C. J., & Braun, A. R. (2008). Neural substrates of spontaneous musical performance: An fMRI study of jazz improvisation. PLoS One, 3(2), e1679.

McGuiness, A. 2009. Mental and motor representation for music performance (Doctoral dissertation, The Open University).

Proulx, Jeremy (forthcoming). “Nature, Judgment and Art: Kant and the Problem of Genius”. Kant Studies Online.

Roden, David 2013. “Nature’s Dark Domain: An Argument for a Naturalised Phenomenology”. Royal Institute of Philosophy Supplements 72: 169–88.

Roden, David. 2014. Posthuman Life: Philosophy at the Edge of the Human. London: Routledge.

Roden, David. Forthcoming. “On Reason and Spectral Machines: An Anti-Normativist Response to Bounded Posthumanism”. In Rosi Braidotti and Rick Dolphijn (eds), Philosophy After Nature.

Rosenberg, Martin E. 2010. “Jazz and Emergence (Part One).” Inflexions 4, “Transversal Fields of Experience”: 183-277. www.inflexions.org

Shaviro, Steven. 2015. Allie X, “Catch”. http://www.shaviro.com/Blog/?p=1287 (accessed 6 May 2015)

[1] These rules determine how one ought to move from one position in the game to another (language-transition rules), how one assumes an “initial position” in the game – say, by observing some state of the world (language-entry rules) – and how one exits the game by intentionally altering a bit of the world (Sellars 1954, 1974). In the case of assertions, the language-transition rules correspond to principles of material inference, such as the one allowing us to move from “x is red” to “x is coloured”. Language-entry moves, on the other hand, are non-inferential, since they are made on the basis of reliable dispositions to discriminate the world (Sellars 1954: 209-10). As Robert Brandom puts it, statements like “This is red” (uttered in response to red things) are “noninferentially elicited but inferentially articulated” (Brandom 1994: 235, 258).

[2] For example, Robert Brandom cites the conditional (if…then…) statement as “the paradigm of a locution that permits one to make inferential commitments explicit as the content of judgements” (Brandom 1994: 109).

[3] Compositional prescriptions are regularly honored in the breach: “For hundreds of years musicians have been taught that it is good to resolve a large leap with a step in the other direction. Surely at least some composers followed this advice? The statistical results from von Hippel and Huron imply that for each passage where a composer had intentionally written according to post-skip reversal, then they must have intentionally transgressed this principle in an equivalent number of passages. Otherwise the statistics would not work out.” (David Huron, Sweet Anticipation: Music and the Psychology of Expectation, p. 84-6). However, the post-skip reversal heuristic is, it seems, applied by musician listeners, which makes sense given that it is easier to apply than a more accurate regression to the mean heuristic – which would require the listener to constantly infer the range (tessitura) of the melody (Ibid.).


[4] “Improvisation makes the performer alive in the moment; it brings one to a state of alertness, even what Ian Carr in his biography of Keith Jarrett has called the ‘state of grace’. This state is enhanced in a group situation of interactive empathy. But all players, except those in a large orchestra, have choices inviting spontaneity at the point of performance. These begin with the room in which they are playing, its humidity and temperature, who they are playing with, and so on.” (183)

[5] As she writes: “Looking from the inside out, the transcendental conditioning of experience establishes clarity by admitting certain contents of an unknowable site of primary production; yet from the outside in, the transcendental conditioning of experience is itself a degenerative noise that degrades the clarity of its external input, rendering it unintelligible and ultimately inaccessible to internal modes of apprehension. What, for the observer-as-subject is clarity, for the observer-as-object is noise. As Niklas Luhmann once remarked: ‘Reality is what one does not perceive when one perceives it’” (Ireland 2014).


[6] For example, while improvising over the first eight bars of Miles Davis’ “So What” I might decide (more or less consciously) to play some digital patterns in the home key then transpose these up a minor third. I might have a conception of how I’ll land back in the home key of Dm: say by transposing down a tone to E flat minor, resolving to Dm in a semitone descent. This will leave much to be resolved on the fly as my body engages the keys. What patterns will I employ? Will they be varied melodically or rhythmically during the root movement from D to F to E flat? Will they employ chromatic elements (outside the related modes) that further muddy the sense of tonality? Will I (at the last) refrain from taking that timid semitone resolution, instead repeating or varying the modulation into more harmonically ambiguous terrain?

[7] For example, space technology, nanotechnology, or the use of brain-computer interfaces.


[Image: Justin Novak, from the ‘Disfigurines’ series]

Steve Fuller has a wildly provocative article over at IEET entitled “We May Look Crazy to Them, But They Look Like Zombies to Us: Transhumanism as a Political Challenge”

As the title suggests, the article seeks to portray the political challenge of transhumanism as an existential conflict between transhumanists (who are committed to indefinite life extension) and a bioconservative hoi polloi who believe:

  1. that they will live no more than 100 years and quite possibly much less.
  2. that this limited longevity is not only natural but also desirable, both for themselves and everyone else.
  3. that the bigger the change, the more likely the resulting harms will outweigh the benefits.

Fuller’s argument goes as follows:

i) Biocons are comprehensively wrong. 1, 2 and 3 are false (Transhumanist assumption)

ii) The Biocons are thus programmed for destruction – not only their own but ours.

iii) The Biocons are thus relevantly similar to zombies.

Or to employ Fuller’s overlit prose:

These are people who live in the space of their largely self-imposed limitations, which function as a self-fulfilling prophecy. They are programmed for destruction – not genetically but intellectually. Someone of a more dramatic turn of mind would say that they are suicide bombers trying to manufacture a climate of terror in humanity’s existential horizons. They roam the Earth as death-waiting-to-happen.

 This much is clear: If you’re a transhumanist, ordinary people are zombies.

It follows that, for transhumanists, the zombie apocalypse is an ongoing political reality and a substantial proportion of those reading this are its benighted vectors. Fuller derives only three political options from extant zombie survival guides:

a) You kill [the zombies] once and for all.

b) You avoid them.

c) You enable them to be fully alive.

All three have their costs, but a) is, in many ways, the most attractive. After all, b) may be just too resource intensive, while c) is similarly problematic. As Fuller concludes:

Here there is a serious public relations problem, one not so different from development aid workers trying to persuade ‘underdeveloped’ peoples that their lives would be appreciably improved by allowing their societies to be radically re-structured so as to double their life expectancy from 40 to 80. While such societies are by no means perfect and may require significant change to deliver what they promise their members, nevertheless the doubling of life expectancy would mean a radical shift in the rhythm of their individual and collective life cycles – which could prove quite threatening to their sense of identity.

Of course, the existential costs suggested here may be overstated, especially in a world where even poor people have decent access to more global trends. Nevertheless the chequered history of development aid since the formal end of Imperialism suggests that there is little political will – at least on the part of Western nations — to invest the human and financial capital needed to persuade people in developing countries that greater longevity is in their own long-term interest, and not simply a pretext to have them work longer for someone else.

I think there’s scope for a transhumanist critique of the Zombie Argument and a posthumanist critique. I’ll say more about the former than the latter in what follows since Fuller’s piece is largely directed at a transhumanist constituency rather than a posthumanist one.

Suppose we understand transhumanism (H+) as a kind of humanism with added gizmos (or control knobs). Then (as I’ve argued in Posthuman Life) H+ is minimally committed to traditional humanist values: in particular, the cultivation of autonomy and rationality. We may construe autonomy as a matter of degree. A person is more autonomous, the more their range of worthwhile choices increases.

A commitment to autonomy seems like a good way to support H+ since increasing our powers to modify nature and ourselves will plausibly increase the ambit of our worthwhile choices. It will make us more autonomous. (We may even add a rider that the cultivation of any power implies a commitment on the part of rational beings to its open-ended extension)

Now I take it that a commitment to rationality includes a commitment to some form of public reason and accountability. I’m not excluding the possibility of emancipatory political violence here,  but the rationale for violence must be genuinely emancipatory and framed in terms that could enlist the support of reasonable interlocutors in the game of “giving and asking for reasons”. A commitment to public reason implies a commitment to the politics of recognition: treating others as rational subjects capable of being swayed by the better argument while being reciprocally committed to abandoning one’s claims in the light of persuasive counter arguments. To use Rawlsian terminology, transhumanism has a political as well as a comprehensive component. The political component provides a side constraint on the way in which its comprehensive aims (life extension, intelligence augmentation, etc.) can be promulgated.

I don’t think Fuller’s zombie argument can pass through this political filter. Not only does it assume that other interlocutors are comprehensively wrong, it portrays them as essentially wrong. Non-transhumanists are not fellow people whose reason one can appeal to, but zombies, a plague on the Earth.

Casting one’s political opponents in this way isn’t humanism with control knobs; it’s anti-humanist zealotry with control-knobs. For a humanist, it constitutes a political betrayal of the project of humanism that transhumanists hope to continue.

This holds even for those who accept the conclusion of the zombie argument but opt for persuasion. If we fail to engage with others as rational beings, we’re betraying the core commitments of humanism and foundering in irrational violence. So the zombie argument not only begs the question in favour of transhumanism, it is pragmatically self-vitiating because it fails the public reason test.

Having set out the bones of a transhumanist rebuttal of Fuller, I’ll content myself with a brief sketch of a posthumanist one. The Speculative Posthumanism that I’ve espoused in Posthuman Life is characterised by a position I call Anthropologically Unbounded Posthumanism (AUP). AUP holds that the space of possible agents is not bound (a priori) by conditions of human agency or society. Since we lack future-proof knowledge of possible agents AUP allows that the results of techno-political interventions could be weird in ways that we are not in a position to imagine (Roden 2014: Ch.3-4; 2013; forthcoming). AUP, note, is an epistemic position that is consonant with some of the claims of critical posthumanists, but also with forms of naturalism and speculative realism.

The ethical predicament of the Speculative Posthumanist is (as I’ve emphasised elsewhere) more complex than that of the Transhumanist or their Promethean and Accelerationist cousins (Roden 2014, Chapters 1-2; Brassier 2014). Given AUP there need be no structure constitutive of all subjectivity or agency. Thus she cannot appeal to an unbounded theory of rational subjectivity to support an ethics of becoming posthuman.

It doesn’t follow that SP implies the rejection of the transhumanist objection. It holds locally for beings of a kind for which the politics of recognition makes sense (e.g. as long as we’re not Jupiter Brains or swarm intelligences). But whether or not this is true, AUP seems to go with a far more pluralist value theory than H+. If we have no a priori grip on the kind of agents that might result from some iteration of future technical activity, we have no grip on what will be important to them. Would life extension make sense to a being that lacked a conception of its self as a persistent agent? We might think that such a being could not be a candidate for properly posthuman status, but I’ve adduced plenty of arguments in PHL and elsewhere to undermine this intuition. In addition, AUP is consistent with multiple posthuman becomings, some of which may involve quite subtle adjustments to gender identity, sexuality, embodiment, and phenomenology. These may or may not involve life extension. In fact it does not seem irrational to adopt certain forms of posthumanist alteration in the knowledge that one’s life might be shortened by so doing (space colonisation, anyone?).

So AUP tells against the claim that only one position regarding life-extension is the right one. It doesn’t preclude the project of life-extension either, but provides strong supplementary grounds for not portraying our Biocon friends as zombies.


Brassier, Ray. 2014. “Prometheanism and its Critics”. In R. Mackay and A. Avanessian (eds.), #Accelerate: The Accelerationist Reader (Falmouth: Urbanomic), 467–488.

Roden, David 2013. “Nature’s Dark Domain: An Argument for a Naturalised Phenomenology”. Royal Institute of Philosophy Supplements 72: 169–88.

Roden, David. 2014. Posthuman Life: Philosophy at the Edge of the Human. London: Routledge.

Roden, David. Forthcoming. “On Reason and Spectral Machines: An Anti-Normativist Response to Bounded Posthumanism”. In Rosi Braidotti and Rick Dolphijn (eds), Philosophy After Nature.



Conversations On TechNoBody

On March 24, 2015, in Uncategorized, by enemyin1

A series of interviews discussing the recent TechnoBody exhibition.

Part of Anti-Utopias’ digital art series.



Ray Brassier’s “Unfree Improvisation/Compulsive Freedom” (written for Freedom is a Constant Struggle, the 2013 event at Glasgow’s Tramway) is a terse but insightful discussion of the notion of freedom in improvisation.

It begins with a polemic against the voluntarist conception of freedom. The voluntarist understands free action as the uncaused expression of a “sovereign self”. Brassier rejects this supernaturalist understanding of freedom. He argues that we should view freedom not as determination of an act from outside the causal order, but as the self-determination of action within the causal order.

According to Brassier, this structure is reflexive. It requires, first of all, a system that acts in conformity to rules but is capable of representing and modifying these rules with implications for its future behaviour. Insofar as there is a “subject” of freedom, then, it is not a “self” but depersonalized acts generated by systems capable of representing and intervening in the patterns that govern them.

The act is the only subject. It remains faceless. But it can only be triggered under very specific circumstances. Acknowledgement of the rule generates the condition for deviating from or failing to act in accordance with the rule that constitutes subjectivity. This acknowledgement is triggered by the relevant recognitional mechanism; it requires no appeal to the awareness of a conscious self….

Brassier’s proximate inspiration for this model of freedom is Wilfrid Sellars’ account of linguistic action in “Some Reflections on Language Games” (1954) and the psychological nominalism in which it is embedded. This distinguishes a basic rule-conforming level from a metalinguistic level at which it is possible to examine the virtues of claims, inferences or the referential scope of terms by semantic ascent: “Intentionality is primarily a property of candid public speech established via the development of metalinguistic resources that allows a community of speakers to talk about talk” (Brassier 2013b: 105; Sellars 1954: 226).

So, for Brassier, the capacity to explore the space of possibilities opened up by rules presupposes a capacity to acknowledge these rules as sources of agency.

There are some difficult foundational questions that could be raised here. Is thought really instituted by linguistic rules or is language an expression of pre-linguistic intentional contents? Are these rules idiomatic (in the manner of Davidson’s passing theories) or communal? What is the relationship between the normative dimension of speech and thought and facts about what thinkers do or are disposed to do?

I’ve addressed these elsewhere, so I won’t belabor them here. My immediate interest, rather, is the extent to which Brassier’s account of act-reflexivity is applicable to musical improvisation.

Brassier does not provide a detailed account of its musical application in “Unfree Improvisation”. What he does write, though, is highly suggestive: implying that the act of free improvisation requires some kind of encounter between rule governed rationality and more idiomatic patterns or causes:

The ideal of “free improvisation” is paradoxical: in order for improvisation to be free in the requisite sense, it must be a self-determining act, but this requires the involution of a series of mechanisms. It is this involutive process that is the agent of the act—one that is not necessarily human. It should not be confused for the improviser’s self, which is rather the greatest obstacle to the emergence of the act.

In (genuinely) free improvisation, it seems, determinants of action become “for themselves”. They enter into the performance situation as explicit possibilities for action.

This seems to demand that “neurobiological or socioeconomic determinants” of musical or non-musical action can become musical material, to be manipulated or altered by performers. How is this possible?

Moreover, is there something about improvisation (as opposed to conventional composition) that is peculiarly apt for generating the compulsive freedom of which Brassier speaks?

After all, his description of the determinants of action in the context of improvisation might apply to the situation of the composer as well. The composer of notated “art music” or the studio musician editing files in a digital-audio workstation seems better placed than the improviser to reflect on and develop her musical rule-conforming behaviour (e.g. exploratory improvisations). She has the ambit to explore the permutations of a melodic or rhythmic fragment or to eliminate sonic or gestural nuances that are, in hindsight, unproductive. The composed gesture is always open to reversal or editing and thus to further refinement.

Thus the improviser seems committed to what Andy Hamilton calls an “aesthetic of imperfection” – in contrast to the musical perfectionism that privileges the realized work. Hamilton claims that the aesthetics of perfection implies and is implied by a Platonic account for which the work is only contingently associated with particular times, places or musical performers (Hamilton 2000: 172). The aesthetics of imperfection, by contrast, celebrates the genesis of a performance and the embodying of the performer in a specific time and space:

Improvisation makes the performer alive in the moment; it brings one to a state of alertness, even what Ian Carr in his biography of Keith Jarrett has called the ‘state of grace’. This state is enhanced in a group situation of interactive empathy. But all players, except those in a large orchestra, have choices inviting spontaneity at the point of performance. These begin with the room in which they are playing, its humidity and temperature, who they are playing with, and so on. (183)

An improvisation consists of irreversible acts that cannot be compositionally refined. They can only be repeated, developed or overwritten in time. It takes place in a time window limited by the memory and attention of the improviser, responding to her own playing, to the other players, or (as Brassier recognises) to the real-time behaviour of machines such as effects processors or midi-filters. Thus the aesthetic importance of the improvising situation seems to depend on a temporality and spatiality that distinguishes it from score-bound composition or studio-bound music production.

Yet, if this is right, it might appear to commit Brassier to a vitalist or phenomenological conception of the lived musical experience foreign to the anti-vitalist, anti-phenomenological tenor of his wider philosophical oeuvre. For this open, processual time must be counter-posed to the Platonic or structuralist ideal of the perfectionist. The imperfection and open indeterminacy of performance time must have ontological weight and insistence if Brassier’s programmatic remarks are to have any pertinence to improvisation as opposed to traditional composition.

This is not intended to be a criticism of Brassier’s position but an attempt at clarification. This commitment to an embodied, historical, machinic and physical temporality seems implicit in the continuation of the earlier passage cited from his text:

The improviser must be prepared to act as an agent—in the sense in which one acts as a covert operative—on behalf of whatever mechanisms are capable of effecting the acceleration or confrontation required for releasing the act. The latter arises at the point of intrication between rules and patterns, reasons and causes. It is the key that unlocks the mystery of how objectivity generates subjectivity. The subject as agent of the act is the point of involution at which objectivity determines its own determination: agency is a second-order process whereby neurobiological or socioeconomic determinants (for example) generate their own determination. In this sense, recognizing the un-freedom of voluntary activity is the gateway to compulsive freedom.

The improvising subject, then, is a process in which diverse processes are translated into a musical event or text that retains an expressive trace of its historical antecedents. As Brassier emphasizes, this process need not be understood in terms of human phenomenological time constrained by the “reverberations” of our working memory (Metzinger 2004: 129) – although this may continue to be the case in practice.

The Derridean connotations of the conjunction “event”/”text”/”trace” are deliberate, since the time of the improvising event is singular and productive – open to multiple repetitions that determine it in different ways. Improvisation is usually constrained (if not musically, then by time or by technical skill or means) but these constraints rarely constitute rules or norms in the conventional sense. There is no single way in which to develop a simple Lydian phrase on a saxophone, a rhythmic cell, or a sample (an audio sample could be filtered, reversed or mangled by reading its entries out of order with a non-standard function, rather than the usual ramp). So the time of improvisation is a peculiarly naked exposure to “things”. Not to a sensory or categorical given, but precisely to an absence of a given that can be technologically remade.
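
The parenthetical remark about samples can be illustrated with a minimal sketch (Python, purely illustrative; the function names and the particular warping curve are my own inventions) contrasting the usual linear ramp through a sample buffer with a non-standard read function that traverses the same material out of order.

```python
import math

def read_linear(buffer, n_out):
    """Conventional playback: index the buffer with a linear ramp."""
    length = len(buffer)
    return [buffer[(i * length) // n_out] for i in range(n_out)]

def read_warped(buffer, n_out, rate=3.7):
    """A 'mangled' read: a folded, non-monotonic index function replaces the ramp,
    so the same material is revisited out of order."""
    length = len(buffer)
    out = []
    for i in range(n_out):
        phase = abs(math.sin(rate * math.pi * i / n_out))  # non-standard index curve
        out.append(buffer[int(phase * (length - 1))])
    return out

# Example: a one-second 440 Hz sine 'sample' at 8 kHz.
sr = 8000
sample = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
plain = read_linear(sample, sr)    # ordinary ramp playback
mangled = read_warped(sample, sr)  # same material, different temporal order
```

The point is only that the buffer itself prescribes neither reading; the “usual” ramp is a convention that the improviser or the machine can suspend.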


Brassier, Ray 2013a. “Unfree Improvisation/Compulsive Freedom”, http://www.mattin.org/essays/unfree_improvisation-compulsive_freedom.html (Accessed March 2015)

Brassier, Ray. 2013b. “Nominalism, Naturalism, and Materialism: Sellars’ Critical Ontology”. In Bana Bashour & Hans D. Muller (eds.), Contemporary Philosophical Naturalism and its Implications. Routledge. 101-114.

Davidson, Donald. 1986. “A Nice Derangement of Epitaphs”. In Truth and Interpretation, E. LePore (ed.), 433–46. Oxford: Blackwell.

Hamilton, A. 2000. “The Art of Improvisation and the Aesthetics of Imperfection”. British Journal of Aesthetics 40 (1): 168–185.

Metzinger, T. 2004. Being No One: The Self-Model Theory of Subjectivity. Cambridge, MA: MIT Press.

Sellars, W. 1954. “Some Reflections on Language Games”. Philosophy of Science 21 (3):204-228.



Scott on the End Times

On November 25, 2014, in Uncategorized, by enemyin1

There’s a lively debate over at Speculative Heresy around Scott Bakker’s recent lecture “The End of the World As We Know It: Neuroscience and the Semantic Apocalypse”, given at the University of Western Ontario’s Centre for the Study of Theory and Criticism. The text includes responses from Nick Srnicek and Ali McMillan.



On November 29, 2013, in Uncategorized, by enemyin1

Nature has just published a dark philosophical tale by leading philosopher of mind Eric Schwitzgebel and Three Pound Brainer Scott Bakker. Enjoy!

Accelerationism and Posthumanism II

On November 21, 2013, in Uncategorized, by enemyin1

Accelerationism combines a transhumanist techno-optimism with a Marxist analysis of the dynamic between the relations and forces of production. Its proponents argue that under capitalism, modern technology is constrained by myopic and socially destructive goals. They argue that rather than abandoning technological modernity for an illusory homeostatic Eden we should exploit and ramp up its incendiary potential in order to escape from the gravity well of market-dominated resource allocation. Like posthumanism, however, Accelerationism comes in several flavours. Benjamin Noys (who coined the term) first identified Accelerationism as a kind of overkill politics invested in freeing the machinic unconscious described in the libidinal poststructuralisms of Lyotard and Deleuze from the domestication of liberal subjectivity and market mechanisms. This itinerary reaches its apogee in the work of Nick Land, who lent the project a cyberpunk veneer borrowed from the writings of William Gibson and Bruce Sterling.

Land’s Accelerationism aims at the extirpation of humanity in favour of an “abstract planetary intelligence rapidly constructing itself from the bricolaged fragments of former civilisations” (Srnicek and Williams 2013).

However, this mirror-shaded beta version has been remodelled and given a new emancipatory focus by writers such as Ray Brassier, Nick Srnicek and Alex Williams (Williams 2013). This “Promethean” phase of Accelerationism argues that technology should be reinstrumentalized towards a project of “maximal collective self-mastery”.

Promethean Accelerationism certainly espouses the same tactic of exacerbating the disruptive effects of technology, but with the aim of cultivating a more autonomous collective subject. As Steven Shaviro points out in his excellent talk “An Introduction to Accelerationism”, this version replicates orthodox Marxism at the level of both strategy and intellectual justification. Its vision of a rationally ordered collectivity mediated by advanced technology seems far closer to Marx’s ideas, say, than Adorno’s dismal negative dialectics or the reactionary identity politics that still animates multiculturalist thinking. If technological modernity is irreversible – short of a catastrophe that would render the whole programme moot – it may be the only prospectus that has a chance of working. As Shaviro points out, an incipient accelerationist logic is already at work among communities using free and open-source software like Pd, where R&D on code modules is distributed among skilled enthusiasts rather than professional software houses (note that a similar community flourishes around Pd’s fancier commercial cousin, Max/MSP, where supplementary external objects are written by users in C++, Java and Python).

This is a small but significant move away from manufacture dominated by market feedback. We are beginning to see similar tendencies in the manufacture of durables and biotech. The era of downloadable things is upon us. In April 2013, a libertarian group calling themselves Defense Distributed announced that they would release the code for “the Liberator”, a gun that can be assembled from layers of plastic in a 3D printer (currently priced at around $8,000). The group’s spokesman, Cody Wilson, anticipates an era in which search engines will provide components “for everything from prosthetic limbs to drugs and birth-control devices”.

However, the alarm that the Liberator created in global law-enforcement agencies exemplifies the first of two potential pitfalls for the Promethean accelerationist itinerary. The democratization of technology – enabled by its easy iteration from context to context – does not seem liable to increase our capacity to control its flows and applications; quite the contrary, and this becomes significant when the iterated tech is not just a Max/MSP external for randomizing arrays but an offensive weapon, an engineered virus or a powerful AI program.

I’ve argued elsewhere that technology has no essence and no itinerary. In its modern form at least, it is counter-final. It is not in control, but it is not in anyone’s control either, and the developments that appear to make a techno-insurgency conceivable are liable to ramp up its counter-finality. This, note, is a structural feature deriving from the increasing mobility of technique in modernity, not from market conditions. There is no reason to think that these issues would not be confronted by a more just world in which resources were better directed to identifiable social goods.

A second issue is also identified in Shaviro’s follow-up discussion over at The Pinocchio Theory: the posthuman. Using a science fiction allegory from a story by Paul Di Filippo, Shaviro suggests that the posthuman could be a figure for a decentred, vital mobilization against capitalism: a line of flight which uses the technologies of capitalist domination to develop new forms of association, embodiment and life.

I think this prospectus is inspiring, but it also has moral dangers that Darian Meacham identifies in a paper forthcoming in The Journal of Medicine and Philosophy entitled ‘Empathy and Alteration: The Ethical Relevance of the Phenomenological Species Concept’. Very briefly, Meacham argues that the development of technologically altered descendants of current humans might precipitate what I term a “disconnection” – the point at which some part of the human socio-technical system spins off to develop separately (Roden 2012). I’ve argued that disconnection is multiply realizable – so far as we can tell. But Meacham suggests that a kind of disconnection could result if human descendants were to become sufficiently alien from us that “we” would no longer have a pre-reflective basis for empathy with them. We would no longer experience them as having our relation to the world or our intentions. Such a “phenomenological speciation” might fragment the notional universality of the human, leading to a multiverse of fissiparous and alienated clades like that envisaged in Bruce Sterling’s novel Schismatrix. A still more radical disconnection might result if super-intelligent AIs went “feral”. At this point, the subject of history itself becomes fissionable. It is no longer just about “us”. Perhaps Land remains the most acute and intellectually consistent accelerationist after all.


Roden, David. 2012. “The Disconnection Thesis”. In The Singularity Hypothesis: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James Moor, and Eric Steinhart. Springer Frontiers Collection.

Srnicek, N. and Williams, A. 2013. “#ACCELERATE MANIFESTO for an Accelerationist Politics”. http://criticallegalthinking.com/2013/05/14/accelerate-manifesto-for-an-accelerationist-politics/

Sterling, Bruce. 1996. Schismatrix Plus. Ace Books.

Williams, Alex, 2013. “Escape Velocities.” E-flux (46). Accessed July 11. http://worker01.e-flux.com/pdf/article_8969785.pdf.












In “The Basic AI Drives” Steve Omohundro has argued that there is scope for predicting the goals of post-singularity entities able to modify their own software and hardware to improve their intellects. For example, systems that can alter their software or physical structure would have an incentive to make modifications that would help them achieve their goals more effectively, as humans have done over historical time. A concomitant of this, he argues, is that such beings would want to ensure that such improvements do not threaten their current goals:

So how can it ensure that future self-modifications will accomplish its current objectives? For one thing, it has to make those objectives clear to itself. If its objectives are only implicit in the structure of a complex circuit or program, then future modifications are unlikely to preserve them. Systems will therefore be motivated to reflect on their goals and to make them explicit (Omohundro 2008).

I think this assumption of ethical self-transparency is interestingly problematic. Here’s why:

Omohundro requires that there could be internal system states of post-singularity AIs whose value content could be legible to the system’s internal probes. Obviously, this assumes that the properties of a piece of hardware or software can determine the content of the system states that it orchestrates independently of the external environment in which the system is located. This property of non-environmental determination is known as “local supervenience” in the philosophy of mind literature. If local supervenience for value-content fails, any inner state could signify different values in different environments. “Clamping” machine states to current values would entail restrictions on the situations in which the system could operate as well as on possible self-modifications.

Local supervenience might well not hold for system values. But let’s assume that it does. The problem for Omohundro is that the relevant inner determining properties are liable to be holistic. The intrinsic shape or colour of an icon representing a station on a metro map is arbitrary. There is nothing about a circle or a square or the colour blue that signifies “station”. It is only the conformity between the relations among the icons and the relations among the stations in the metro system represented which does this (Churchland’s 2012 account of the meaning of prototype vectors in neural networks utilizes this analogy).

The moral of this is that once we disregard system-environment relations, the only properties liable to anchor the content of a system state are its relations to other states of the system. Thus the meaning of an internal state s under some configuration of the system must depend on some inner context (like a cortical map) where s is related to lots of other states of a similar kind (Fodor and Lepore 1992).

But relationships between the states of self-modifying AI systems are assumed to be extremely plastic, because each system will have an excellent model of its own hardware and software and the power to modify them (call this “hyperplasticity”). If these relationships are modifiable then any given state could exist in alternative configurations. These states might function like homonyms within or between languages, having very different meanings in different contexts.

Suppose that some hyperplastic AI needs to ensure that a state in one of its value circuits, s, retains the value it has under the machine’s current configuration: v*. To do this it must avoid altering itself in ways that would lead to s being in an inner context in which it meant some other value (v**) or no value at all. It must clamp itself to contexts in which s does not assume v**, v***, etc.
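
A minimal sketch may help make the worry concrete (this is purely illustrative: the state names, contexts and values are hypothetical, and nothing like it appears in Omohundro’s paper). The dictionary plays the role of the machine’s configuration space, and the same state token s expresses different values depending on the inner context it is paired with:

```python
# Toy sketch of context-dependent value content (all names hypothetical):
# the value an internal state expresses depends on the inner configuration
# it occurs in, much as a homonym means different things in different languages.

VALUE_UNDER_CONTEXT = {
    ("s", "c"):  "v*",    # under the current configuration, s expresses v*
    ("s", "c2"): "v**",   # after one self-modification, the same state expresses v**
    ("s", "c3"): None,    # in a third configuration, s expresses no value at all
}

def value_of(state, context):
    """Return the value a state expresses relative to an inner context."""
    return VALUE_UNDER_CONTEXT.get((state, context))

# "Clamping" s to v* means restricting self-modification to configurations
# that keep s paired with a value-preserving context:
safe_contexts = [ctx for (st, ctx), val in VALUE_UNDER_CONTEXT.items()
                 if st == "s" and val == "v*"]
print(safe_contexts)  # ['c']
```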

To achieve clamping, though, it needs to select possible configurations of itself in which s is paired with a context c that preserves its meaning.

The problem for the AI is that all [s + c] pairings are yet more internal system states, and any system state might assume different meanings in different contexts. To ensure that s means v* in context c, it needs to do to [s + c] what it had been attempting with s – restrict itself to the supplementary contexts in which [s + c] leads to s having v* as a value and not something else.

Now, a hyperplastic machine will always be in a position to modify any configuration that it finds itself in (for good or ill). So this problem will be replicated for any combination of states [s + c + . . .] that the machine could assume within its configuration space. Each of these states will be repeatable in yet other contexts, and so on. Since any concatenation of system states is itself a system state to which the principle of contextual variability applies, there is no final system state for which this issue does not arise.

Clamping any arbitrary s requires that we have already clamped some undefined set of contexts for s, and this condition applies inductively to all system states. So when Omohundro envisages a machine scanning its internal states to explicate their values, he seems to be presupposing that an infinite task has already been completed by a being with vast but presumably still finite computational resources.
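
The regress can be pictured with another hedged toy sketch (again purely illustrative, not a model of any actual architecture): each attempt to fix a state’s context just manufactures a larger composite state that stands in need of the same treatment.

```python
# Minimal sketch of the clamping regress (all names hypothetical). To clamp
# a state, the machine must fix an inner context for it; but the
# state-plus-context pair is itself a further system state needing its own
# context, and so on. The argument supplies no base case; `depth` merely
# truncates the printout.

def clamping_chain(state, depth):
    """Return the growing chain of composite states that would have to be
    clamped in order to fix the value of `state`."""
    if depth == 0:
        return [state]
    composite = (state, "a context for " + repr(state))   # [s + c]
    return [state] + clamping_chain(composite, depth - 1)

for layer in clamping_chain("s", 3):
    print(layer)
# 's', then ('s', ...), then (('s', ...), ...): each layer is a new system
# state whose own context would in turn need clamping.
```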


Block, N. (1986). ‘Advertisement for a Semantics for Psychology’. Midwest Studies in Philosophy 10 (1): 615–78.

Churchland, P. (2012). Plato’s Camera: How the Physical Brain Captures a Landscape of Abstract Universals. Cambridge, MA: MIT Press.

Fodor, J. and Lepore, E. (1992). Holism: A Shopper’s Guide. Oxford: Blackwell.

Omohundro, S. M. (2008). ‘The Basic AI Drives’. Frontiers in Artificial Intelligence and Applications 171: 483.




Braidotti’s Vital Posthumanism

October 21, 2013

Critical posthumanists argue that the idea of a universal human nature has lost its capacity to support our moral and epistemological commitments. The sources of this loss of foundational status are multiple according to writers like Donna Haraway, Katherine Hayles (1999), Neil Badmington (2003), Claire Colebrook and Rosi Braidotti. They include post-Darwinian naturalizations of life and mind, which theoretically level differences between living and machinic systems, and the more intimate ways of enmeshing living entities in systems of control and exploitation that flow from the new life and cognitive sciences. Latterly, writers such as Braidotti and Colebrook have argued that a politics oriented purely towards the rights and welfare of humans is incapable of addressing issues such as climate change or ecological depletion in the Anthropocene era, in which humans “have become a geological force capable of affecting all life on this planet” (Braidotti 2013: 66).

On the surface, this seems like a hyperbolic claim. If current global problems are a consequence of human regulation or mismanagement, then their solution will surely require human political and technological agency and institutions.

But let’s just assume that there is something to the critical posthumanist’s deconstruction of the human subject and that, in consequence, we can no longer assume that the welfare and agency of human subjects should be the exclusive goal of politics. If this is right, then critical posthumanism needs to do more than pick over the vanishing traces of the human in philosophy, literature and art. It requires an ethics that is capable of formulating the options open to some appropriately capacious political constituency in our supposedly post-anthropocentric age.

Braidotti’s recent work The Posthuman is an attempt to formulate such an ethics. Braidotti acknowledges and accepts the levelling of the status of human subjectivity implied by developments in cognitive science and biology and the “analytic posthumanism” that falls out of this new ontological vision. However, she is impatient with what she perceives as a disabling vacillation and neutrality that easily follows from the junking of the human subject as the arbiter of the right and the good. She argues that a posthuman ethics and politics need to retain the idea of political subjectivity: an agency capable of constructing new forms of ethical community and experimenting with new modes of being:

In my view, a focus on subjectivity is necessary because this notion enables us to string together issues that are currently scattered across a number of domains. For instance, issues such as norms and values, forms of community bonding and social belonging as well as questions of political governance both assume and require a notion of the subject.

However, according to Braidotti, this is no longer the classical self-legislating subject of Kantian humanism. It is a vital, polyvalent connection-maker constituted “in and by multiplicity” – by “multiple belongings”:

The relational capacity of the posthuman subject is not confined within our species, but it includes all non-anthropocentric elements. Living matter – including the flesh – is intelligent and self-organizing, but it is precisely because it is not disconnected from the rest of organic life.

‘Life’, far from being codified as the exclusive property or unalienable right of one species, the human, over all others or of being sacralised as a pre-established given, is posited as process, interactive and open ended. This vitalist approach to living matter displaces the boundary between the portion of life – both organic and discursive – that has traditionally been reserved for anthropos, that is to say bios, and the wider scope of animal and nonhuman life also known as zoe (Braidotti 2013: 60).

Thus posthuman subjectivity, for Braidotti, is not human but a tendency inherent in human and nonhuman living systems alike to affiliate with other living systems to form new functional assemblages. Clearly, not everything has the capacity to perform every function. Nonetheless, living systems can be co-opted by other systems for functions “God” never intended and Mother Nature never designed them for. As Haraway put it:  ‘No objects, spaces, or bodies are sacred in themselves; any component can be interfaced with any other if the proper standard, the proper code, can be constructed for processing signals in a common language’ (Haraway 1989: 187). There are no natural limits or functions for bodies or their parts, merely patterns of connection and operation that do not fall apart all at once.

Zoe . . . is the transversal force that cuts across and reconnects previously segregated species, categories and domains. Zoe-centered egalitarianism is, for me, the core of the post-anthropocentric turn: it is a materialist, secular, grounded and unsentimental response to the opportunistic trans-species commodification of Life that is the logic of advanced capitalism.

Of course, if anything can be co-opted for any function that its powers can sustain, one might ask how zoe can support a critique of advanced capitalism which, as Braidotti concedes, produces a form of the “posthuman” by radically disrupting the boundaries between humans, animals, species and technique. What could be a greater expression of zoe’s transversal potential than, say, Monsanto’s transgenic cotton Bollgard II? Bollgard II contains genes from the soil bacterium Bacillus thuringiensis that produce a toxin deadly to pests such as bollworm. Unless we believe that there is some telos inherent in thuringiensis or in cotton that makes such transversal crossings aberrant – which Braidotti clearly does not – there appears to be no zoe-eyed perspective that could warrant her objection. Monsanto’s genetic engineers are just sensibly utilizing possibilities for connection that are already afforded by living systems but which cannot be realized without technological mediation (here via gene transfer technology). If the genes responsible for producing the Bt toxin in thuringiensis did not work in cotton and increase yields, it would presumably not be the type used by the majority of farmers today (Ronald 2013).

Cognitive and biological capitalists like Google and Monsanto seem to incarnate the tendencies of zoe – conceived as a generalized possibility of connection – as much as “not-for-profit” cyborg experimenters like Kevin Warwick or the publicly funded creators of HTML, Dolly the Sheep and Golden Rice. Doesn’t Google show us what a search engine can do?

We could object to Monsanto’s activities on the grounds that they have invidious social consequences or on the grounds that all technologies should be socially rather than corporately controlled. Neither of these arguments is obviously grounded in posthumanism or “zoe-centrism” – Marxist humanists would presumably agree with the latter claim, for example.

However, we can find the traces of a zoe-centered argument in the Deleuzean ethics explored in her essay “The Ethics of Becoming Imperceptible” (Braidotti 2006). This argues for an ethics oriented towards enabling entities to actualize their powers to their fullest “sustainable” extent. A becoming or actualization of power is sustainable if the assemblage or agency exercising it can do so without “destroying” the systems that make its exercise possible. Thus an affirmative posthuman ethics follows Nietzsche in making it possible for subjects to exercise their powers up to the edge but not beyond it – the point at which that exercise falters or the system exercising it falls apart.

To live intensely and be alive to the nth degree pushes us to the extreme edge of mortality. This has implications for the question of the limits, which are in-built in the very embodied and embedded structure of the subject. The limits are those of one’s endurance – in the double sense of lasting in time and bearing the pain of confronting ‘Life” as zoe. The ethical subject is one that can bear this confrontation, cracking up a bit but without having its physical or affective intensity destroyed by it. Ethics consists in re-working the pain into threshold of sustainability, when and if possible: cracking, but holding it, still.

So capitalism can be criticized from the zoe-centric position if it constrains powers that could be more fully realized in a different system of social organization. For Braidotti, the capitalist posthuman is constrained by the demands of possessive individualism and accumulation.

The perversity of advanced capitalism, and its undeniable success, consists in reattaching the potential for experimentation with new subject formations back to an overinflated notion of possessive individualism . . ., tied to the profit principle. This is precisely the opposite direction from the non-profit experimentations with intensity, which I defend in my theory of posthuman subjectivity. The opportunistic political economy of bio-genetic capitalism turns Life/zoe – that is to say human and non-human intelligent matter – into a commodity for trade and profit (Braidotti 2013: 60–61).

Thus she supports “non-profit” experiments with contemporary subjectivity that show what “contemporary, biotechnologically mediated bodies are capable of doing” while resisting the neo-liberal appropriation of living entities as tradable commodities.

Whether the constraint claim is true depends on whether an independent, non-capitalist posthuman (in Braidotti’s sense of the term) is possible, or whether significant posthuman experimentation – particularly experimentation involving sophisticated technologies like AI or brain-computer interfaces – will depend on the continued existence of a global capitalist technical system to support it. I admit to being agnostic about this. While modern technologies such as gene transfer do not seem essentially capitalist, there is little evidence to date that a non-capitalist system could develop them, or their concomitant forms of hybridized “posthuman”, more prolifically.

Nonetheless, there seems to be a significant ethical claim at issue here that can be used independently of its applicability to the critique of contemporary capitalism.

For example, I have recently argued for an overlap or convergence between critical posthumanism and Speculative Posthumanism: the claim that descendants of current humans could cease to be human by virtue of a history of technical augmentation (SP). Braidotti’s ethics of sustainability is pertinent here because SP in its strong form is also post-anthropocentric – it denies that posthuman possibility is structured a priori by human modes of thought or discourse – and because it defines the posthuman in terms of its power to escape from a socio-technical system organized around human-dependent ends (Roden 2012). The technological offspring described by SP will need to be functionally autonomous insofar as they will have to develop their own ends or modes of existence outside or beyond the human space of ends. Reaching “posthuman escape velocity” will require the cultivation and expression of powers in ways that are sustainable for such entities. This presupposes, of course, that we can have a conception of a subject or agent that is grounded in their embodied capacities or powers rather than general principles applicable to human agency. Understanding its ethical valence thus requires an affirmative conception of these powers that is not dependent on overhanging  anthropocentric ideas such as moral autonomy. Braidotti’s ethics of sustainability thus suggests some potentially viable terms of reference for formulating an ethics of becoming posthuman in the speculative sense.


Badmington, N. (2003). ‘Theorizing Posthumanism’. Cultural Critique 53 (Winter): 10–27.

Braidotti, R. (2006). ‘The Ethics of Becoming Imperceptible’. In Constantin Boundas (ed.), Deleuze and Philosophy. Edinburgh: Edinburgh University Press, 133–159.

Braidotti, R. (2013). The Posthuman. Cambridge: Polity Press.

Colebrook, C. (2012a). ‘A Globe of One’s Own: In Praise of the Flat Earth’. Substance: A Review of Theory & Literary Criticism 41 (1): 30–39.

Colebrook, C. (2012b). ‘Not Symbiosis, Not Now: Why Anthropogenic Change Is Not Really Human’. Oxford Literary Review 34 (2): 185–209.

Haraway, D. (1989). ‘A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s’. In Elizabeth Weed (ed.), Coming to Terms. London: Routledge, 173–204.

Hayles, N. K. (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

Roden, D. (2010). ‘Deconstruction and Excision in Philosophical Posthumanism’. The Journal of Evolution & Technology 21 (1): 27–36.

Roden, D. (2012). ‘The Disconnection Thesis’. In Singularity Hypotheses: A Scientific and Philosophical Assessment. Berlin: Springer, 281–298.

Roden, D. (2013). ‘Nature’s Dark Domain: An Argument for a Naturalized Phenomenology’. Royal Institute of Philosophy Supplement 72: 169–188.

Roden, D. (2014). Posthuman Life: Philosophy at the Edge of the Human. Acumen Publishing.