Ray Brassier’s “Unfree Improvisation/Compulsive Freedom” (written for the 2013 event Freedom is a Constant Struggle at Glasgow’s Tramway) is a terse but insightful discussion of the notion of freedom in improvisation.
It begins with a polemic against the voluntarist conception of freedom. The voluntarist understands free action as the uncaused expression of a “sovereign self”. Brassier rejects this supernaturalist understanding of freedom. He argues that we should view freedom not as determination of an act from outside the causal order, but as the self-determination of action within the causal order.
According to Brassier, this structure is reflexive. It requires, first of all, a system that acts in conformity to rules but is capable of representing and modifying these rules with implications for its future behaviour. Insofar as there is a “subject” of freedom, then, it is not a “self” but depersonalized acts generated by systems capable of representing and intervening in the patterns that govern them.
The act is the only subject. It remains faceless. But it can only be triggered under very specific circumstances. Acknowledgement of the rule generates the condition for deviating from or failing to act in accordance with the rule that constitutes subjectivity. This acknowledgement is triggered by the relevant recognitional mechanism; it requires no appeal to the awareness of a conscious self….
Brassier’s proximate inspiration for this model of freedom is Wilfrid Sellars’ account of linguistic action in “Some Reflections on Language Games” (1954) and the psychological nominalism in which it is embedded. This distinguishes a basic rule-conforming level from a metalinguistic level in which it is possible to examine the virtues of claims, inferences or the referential scope of terms by semantic ascent: “Intentionality is primarily a property of candid public speech established via the development of metalinguistic resources that allows a community of speakers to talk about talk” (Brassier 2013b: 105; Sellars 1954: 226).
So, for Brassier, the capacity to explore the space of possibilities opened up by rules presupposes a capacity to acknowledge those rules as sources of agency.
There are some difficult foundational questions that could be raised here. Is thought really instituted by linguistic rules or is language an expression of pre-linguistic intentional contents? Are these rules idiomatic (in the manner of Davidson’s passing theories) or communal? What is the relationship between the normative dimension of speech and thought and facts about what thinkers do or are disposed to do?
I’ve addressed these elsewhere, so I won’t belabor them here. My immediate interest, rather, is the extent to which Brassier’s account of act-reflexivity is applicable to musical improvisation.
Brassier does not provide a detailed account of its musical application in “Unfree Improvisation”. What he does write, though, is highly suggestive, implying that the act of free improvisation requires some kind of encounter between rule-governed rationality and more idiomatic patterns or causes:
The ideal of “free improvisation” is paradoxical: in order for improvisation to be free in the requisite sense, it must be a self-determining act, but this requires the involution of a series of mechanisms. It is this involutive process that is the agent of the act—one that is not necessarily human. It should not be confused for the improviser’s self, which is rather the greatest obstacle to the emergence of the act.
In (genuinely) free improvisation, it seems, determinants of action become “for themselves”. They enter into the performance situation as explicit possibilities for action.
This seems to demand that “neurobiological or socioeconomic” determinants of musical or non-musical action can become musical material, to be manipulated or altered by performers. How is this possible?
Moreover, is there something about improvisation (as opposed to conventional composition) that is peculiarly apt for generating the compulsive freedom of which Brassier speaks?
After all, his description of the determinants of action in the context of improvisation might apply to the situation of the composer as well. The composer of notated “art music” or the studio musician editing files in a digital audio workstation seems better placed than the improviser to reflect on and develop her musical rule-conforming behaviour (e.g. exploratory improvisations). She has the ambit to explore the permutations of a melodic or rhythmic fragment or to eliminate sonic or gestural nuances that are, in hindsight, unproductive. The composed gesture is always open to reversal or editing and thus to further refinement.
Thus the improviser seems committed to what Andy Hamilton calls an “aesthetic of imperfection” – in contrast to the musical perfectionism that privileges the realized work. Hamilton claims that the aesthetics of perfection implies and is implied by a Platonic account for which the work is only contingently associated with particular times, places or musical performers (Hamilton 2000: 172). The aesthetics of imperfection, by contrast, celebrates the genesis of a performance and the embedding of the performer in a specific time and space:
Improvisation makes the performer alive in the moment; it brings one to a state of alertness, even what Ian Carr in his biography of Keith Jarrett has called the ‘state of grace’. This state is enhanced in a group situation of interactive empathy. But all players, except those in a large orchestra, have choices inviting spontaneity at the point of performance. These begin with the room in which they are playing, its humidity and temperature, who they are playing with, and so on. (183)
An improvisation consists of irreversible acts that cannot be compositionally refined. They can only be repeated, developed or overwritten in time. It takes place in a time window limited by the memory and attention of the improviser, responding to her own playing, to the other players, or (as Brassier recognises) to the real-time behaviour of machines such as effects processors or MIDI filters. Thus the aesthetic importance of the improvising situation seems to depend on a temporality and spatiality that distinguish it from score-bound composition or studio-bound music production.
Yet, if this is right, it might appear to commit Brassier to a vitalist or phenomenological conception of the lived musical experience foreign to the anti-vitalist, anti-phenomenological tenor of his wider philosophical oeuvre. For this open, processual time must be counter-posed to the Platonic or structuralist ideal of the perfectionist. The imperfection and open indeterminacy of performance time must have ontological weight and insistence if Brassier’s programmatic remarks are to have any pertinence to improvisation as opposed to traditional composition.
This is not intended to be a criticism of Brassier’s position but an attempt at clarification. This commitment to an embodied, historical, machinic and physical temporality seems implicit in the continuation of the earlier passage cited from his text:
The improviser must be prepared to act as an agent—in the sense in which one acts as a covert operative—on behalf of whatever mechanisms are capable of effecting the acceleration or confrontation required for releasing the act. The latter arises at the point of intrication between rules and patterns, reasons and causes. It is the key that unlocks the mystery of how objectivity generates subjectivity. The subject as agent of the act is the point of involution at which objectivity determines its own determination: agency is a second-order process whereby neurobiological or socioeconomic determinants (for example) generate their own determination. In this sense, recognizing the un-freedom of voluntary activity is the gateway to compulsive freedom.
The improvising subject, then, is a process in which diverse processes are translated into a musical event or text that retains an expressive trace of its historical antecedents. As Brassier emphasizes, this process need not be understood in terms of human phenomenological time constrained by the “reverberations” of our working memory (Metzinger 2004: 129) – although this may continue to be the case in practice.
The Derridean connotations of the conjunction “event”/”text”/”trace” are deliberate, since the time of the improvising event is singular and productive – open to multiple repetitions that determine it in different ways. Improvisation is usually constrained (if not musically, then by time, technical skill or means) but these constraints rarely constitute rules or norms in the conventional sense. There is no single way in which to develop a simple Lydian phrase on a saxophone, a rhythmic cell, or a sample (an audio sample could be filtered, reversed or mangled by reading its entries out of order with a non-standard function, rather than the usual ramp). So the time of improvisation is a peculiarly naked exposure to “things”. Not to a sensory or categorical given, but precisely to an absence of a given that can be technologically remade.
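The parenthetical point about nonstandard sample reading can be made concrete with a toy sketch. The function names and the modular “ramp” below are purely illustrative (they belong to no real audio library); the point is only that the same buffer of values yields quite different musical material depending on the indexing function used to read it.

```python
# Toy sketch: "mangling" a sample by reading its entries out of order.
# Instead of the usual linear ramp (indices 0, 1, 2, ...), we index
# the buffer with a nonstandard function. Illustrative names only.

import math

def linear_read(buffer):
    """The usual ramp: play the sample in order."""
    return [buffer[i] for i in range(len(buffer))]

def reversed_read(buffer):
    """Play the sample backwards."""
    return buffer[::-1]

def mangled_read(buffer, rate=3):
    """Read the buffer with a modular ramp that visits indices out of
    order (a permutation when `rate` is coprime with the length)."""
    n = len(buffer)
    return [buffer[(i * rate) % n] for i in range(n)]

# One cycle of a sine wave, read three different ways:
sample = [math.sin(2 * math.pi * i / 8) for i in range(8)]
print(mangled_read(sample))  # same entries, different order
```

Nothing in the buffer itself dictates one reading rather than another, which is the sense in which the material is open to being “technologically remade”.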
Brassier, Ray. 2013a. “Unfree Improvisation/Compulsive Freedom”. http://www.mattin.org/essays/unfree_improvisation-compulsive_freedom.html (accessed March 2015).
Brassier, Ray. 2013b. “Nominalism, Naturalism, and Materialism: Sellars’ Critical Ontology”. In Bana Bashour & Hans D. Muller (eds.), Contemporary Philosophical Naturalism and its Implications. Routledge. 101-114.
Davidson, Donald. 1986. “A Nice Derangement of Epitaphs”. In Truth and Interpretation, E. LePore (ed.), 433-46. Oxford: Blackwell.
Hamilton, Andy. 2000. “The Art of Improvisation and the Aesthetics of Imperfection”. British Journal of Aesthetics 40 (1): 168-185.
Metzinger, T. 2004. Being No One: The Self-Model Theory of Subjectivity. Cambridge, MA: MIT Press.
Sellars, W. 1954. “Some Reflections on Language Games”. Philosophy of Science 21 (3):204-228.
There’s a lively debate at Speculative Heresy around Scott Bakker’s recent lecture “The End of the World As We Know It: Neuroscience and the Semantic Apocalypse”, given at The University of Western Ontario’s Centre for the Study of Theory and Criticism. The text includes responses from Nick Srnicek and Ali McMillan.
Accelerationism combines a transhumanist techno-optimism with a Marxist analysis of the dynamic between the relations and forces of production. Its proponents argue that under capitalism, modern technology is constrained by myopic and socially destructive goals. They argue that rather than abandoning technological modernity for an illusory homeostatic Eden, we should exploit and ramp up its incendiary potential in order to escape the gravity well of market-dominated resource allocation. Like posthumanism, however, Accelerationism comes in several flavours. Benjamin Noys (who coined the term) first identified Accelerationism as a kind of overkill politics invested in freeing the machinic unconscious described in the libidinal poststructuralisms of Lyotard and Deleuze from the domestication of liberal subjectivity and market mechanisms. This itinerary reaches its apogee in the work of Nick Land, who lent the project a cyberpunk veneer borrowed from the writings of William Gibson and Bruce Sterling.
Land’s Accelerationism aims at the extirpation of humanity in favour of an “abstract planetary intelligence rapidly constructing itself from the bricolaged fragments of former civilisations” (Srnicek and Williams 2013).
However, this mirror-shaded beta version has been remodelled and given a new emancipatory focus by writers such as Ray Brassier, Nick Srnicek and Alex Williams (Williams 2013). This “Promethean” phase of Accelerationism argues that technology should be reinstrumentalized towards a project of “maximal collective self-mastery”.
Promethean Accelerationism certainly espouses the same tactic of exacerbating the disruptive effects of technology, but with the aim of cultivating a more autonomous collective subject. As Steven Shaviro points out in his excellent talk “An Introduction to Accelerationism”, this version replicates orthodox Marxism at the level of both strategy and intellectual justification. Its vision of a rationally-ordered collectivity mediated by advanced technology seems far closer to Marx’s ideas, say, than Adorno’s dismal negative dialectics or the reactionary identity politics that still animates multiculturalist thinking. If technological modernity is irreversible – short of a catastrophe that would render the whole programme moot – it may be the only prospectus that has a chance of working. As Shaviro points out, an incipient accelerationist logic is already at work among communities using free and open-source software like Pd, where R&D on code modules is distributed among skilled enthusiasts rather than professional software houses. (Note that a similar community flourishes around Pd’s fancier commercial cousin, Max/MSP, where supplementary external objects are written by users in C++, Java and Python.)
This is a small but significant move away from manufacture dominated by market feedback. We are beginning to see similar tendencies in the manufacture of durables and biotech. The era of downloadable things is upon us. In April 2013, a libertarian group calling itself Defense Distributed announced that it would release the code for “the Liberator”, a gun that can be assembled from layers of plastic in a 3D printer (currently priced at around $8,000). The group’s spokesman, Cody Wilson, anticipates an era in which search engines will provide components “for everything from prosthetic limbs to drugs and birth-control devices”.
However, the alarm that the Liberator created in global law-enforcement agencies exemplifies the first of two potential pitfalls for the Promethean accelerationist itinerary. The democratization of technology – enabled by its easy iteration from context to context – does not seem liable to increase our capacity to control its flows and applications; quite the contrary, and this becomes significant when the iterated tech is not just a Max/MSP external for randomizing arrays but an offensive weapon, an engineered virus or a powerful AI program.
I’ve argued elsewhere that technology has no essence and no itinerary. In its modern form at least, it is counter-final. It is not in control, but it is not in anyone’s control either, and the developments that appear to make a techno-insurgency conceivable are liable to ramp up its counter-finality. This, note, is a structural feature deriving from the increasing mobility of technique in modernity, not from market conditions. There is no reason to think that these issues would not be confronted by a more just world in which resources were better directed to identifiable social goods.
A second issue is also identified in Shaviro’s follow-up discussion over at The Pinocchio Theory: the posthuman. Using a science fiction allegory from a story by Paul Di Filippo, Shaviro suggests that the posthuman could be a figure for a decentred, vital mobilization against capitalism: a line of flight which uses the technologies of capitalist domination to develop new forms of association, embodiment and life.
I think this prospectus is inspiring, but it also has moral dangers that Darian Meacham identifies in a paper forthcoming in The Journal of Medicine and Philosophy entitled ‘Empathy and Alteration: The Ethical Relevance of the Phenomenological Species Concept’. Very briefly, Meacham argues that the development of technologically altered descendants of current humans might precipitate what I term a “disconnection” – the point at which some part of the human socio-technical system spins off to develop separately (Roden 2012). I’ve argued that disconnection is multiply realizable – so far as we can tell. But Meacham suggests that a kind of disconnection could result if human descendants were to become sufficiently alien from us that “we” would no longer have a pre-reflective basis for empathy with them. We would no longer experience them as having our relation to the world or our intentions. Such a “phenomenological speciation” might fragment the notional universality of the human, leading to a multiverse of fissiparous and alienated clades like that envisaged in Bruce Sterling’s novel Schismatrix. A still more radical disconnection might result if super-intelligent AIs went “feral”. At this point, the subject of history itself becomes fissionable. It is no longer just about “us”. Perhaps Land remains the most acute and intellectually consistent accelerationist after all.
Roden, David. 2012. “The Disconnection Thesis”. In The Singularity Hypothesis: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, Jim Moor, and Eric Steinhart. Springer Frontiers Collection.
Srnicek, N. and Williams, A. (2013). “#ACCELERATE MANIFESTO for an Accelerationist Politics”. http://criticallegalthinking.com/2013/05/14/accelerate-manifesto-for-an-accelerationist-politics/
Sterling, Bruce. 1996. Schismatrix Plus. Ace Books.
Williams, Alex, 2013. “Escape Velocities.” E-flux (46). Accessed July 11. http://worker01.e-flux.com/pdf/article_8969785.pdf.
In “The Basic AI Drives” Steve Omohundro has argued that there is scope for predicting the goals of post-singularity entities able to modify their own software and hardware to improve their intellects. For example, systems that can alter their software or physical structure would have an incentive to make modifications that would help them achieve their goals more effectively, as humans have done over historical time. A concomitant of this, he argues, is that such beings would want to ensure that such improvements do not threaten their current goals:
So how can it ensure that future self-modifications will accomplish its current objectives? For one thing, it has to make those objectives clear to itself. If its objectives are only implicit in the structure of a complex circuit or program, then future modifications are unlikely to preserve them. Systems will therefore be motivated to reflect on their goals and to make them explicit (Omohundro 2008).
I think this assumption of ethical self-transparency is interestingly problematic. Here’s why:
Omohundro requires that there could be internal system states of post-singularity AIs whose value content could be legible to the system’s internal probes. Obviously, this assumes that the properties of a piece of hardware or software can determine the content of the system states that it orchestrates independently of the external environment in which the system is located. This property of non-environmental determination is known as “local supervenience” in the philosophy of mind literature. If local supervenience for value-content fails, any inner state could signify different values in different environments. “Clamping” machine states to current values would entail restrictions on the situations in which the system could operate as well as on possible self-modifications.
Local supervenience might well not hold for system values. But let’s assume that it does. The problem for Omohundro is that the relevant inner determining properties are liable to be holistic. The intrinsic shape or colour of an icon representing a station on a metro map is arbitrary. There is nothing about a circle or a square or the colour blue that signifies “station”. It is only the conformity between the relations between the icons and the stations in the metro system it represents which does this (Churchland’s 2012 account of the meaning of prototype vectors in neural networks utilizes this analogy).
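The metro-map analogy can be sketched computationally. In the toy model below (the maps, labels and the `same_structure` helper are my illustrative inventions, not Churchland’s), two “maps” with entirely different intrinsic node labels represent the same network, because what carries the representational load is the pattern of relations, not any intrinsic property of the icons:

```python
# Two toy "metro maps": keys are icons, values are the sets of icons
# each is connected to. The intrinsic labels differ completely.
map_a = {
    "blue_circle": {"red_square"},
    "red_square": {"blue_circle", "green_dot"},
    "green_dot": {"red_square"},
}
map_b = {
    "x": {"y"},
    "y": {"x", "z"},
    "z": {"y"},
}

def same_structure(g1, g2, pairing):
    """Check that `pairing` maps g1 onto g2 preserving adjacency,
    i.e. that the two maps share their relational structure."""
    return all(
        {pairing[m] for m in g1[n]} == g2[pairing[n]]
        for n in g1
    )

# Different "shapes and colours", identical relational structure:
print(same_structure(
    map_a, map_b,
    {"blue_circle": "x", "red_square": "y", "green_dot": "z"},
))  # True
```

Since any relabelling that preserves adjacency does equally well, nothing intrinsic to `blue_circle` makes it mean one station rather than another, which is the holist point.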
The moral of this is that once we disregard system-environment relations, the only properties liable to anchor the content of a system state are its relations to other states of the system. Thus the meaning of an internal state s under some configuration of the system must depend on some inner context (like a cortical map) where s is related to lots of other states of a similar kind (Fodor and Lepore 1992).
But relationships between states of the self-modifying AI systems are assumed to be extremely plastic because each system will have an excellent model of its own hardware and software and the power to modify them (call this “hyperplasticity”). If these relationships are modifiable then any given state could exist in alternative configurations. These states might function like homonyms within or between languages, having very different meanings in different contexts.
Suppose that some hyperplastic AI needs to ensure that a state in one of its value circuits, s, retains the value it has under the machine’s current configuration: v*. To do this it must avoid altering itself in ways that would lead to s being in an inner context in which it meant some other value (v**) or no value at all. It must clamp itself to those contexts to avoid s assuming v** or v***, etc.
To achieve clamping, though, it needs to select possible configurations of itself in which s is paired with a context c that preserves its meaning.
The problem for the AI is that all [s + c] pairings are yet more internal system states, and any system state might assume different meanings in different contexts. To ensure that s means v* in context c it needs to do for [s + c] what it had been attempting with s – restrict itself to the supplementary contexts in which [s + c] leads to s having v* as its value and not something else.
Now, a hyperplastic machine will always be in a position to modify any configuration that it finds itself in (for good or ill). So this problem will be replicated for any combination of states [s + c . . . + . . ..] that the machine could assume within its configuration space. Each of these states will have to be repeatable in yet other contexts, etc. Since concatenation of system states is a system state to which the principle of contextual variability applies, there is no final system state for which this issue does not arise.
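The shape of this regress can be rendered in a few lines of code. The sketch below is purely illustrative (the symbols `s`, `c1`, `c2`, … stand in for system states and contexts; no claim is made about real AI architectures): each act of clamping a state to a context produces a new state-plus-context compound, which is itself a state needing a further context, and the sequence never bottoms out in a context-free state.

```python
# Toy rendering of the clamping regress: a state's value is fixed only
# relative to a context, but state-plus-context is itself a state
# needing a further context, and so on without end.

def states_to_clamp(s, depth):
    """Yield the nested [s + c], [s + c + c'], ... states that would
    need clamping, out to `depth` levels. Contexts are stand-in
    symbols c1, c2, ... rather than real machine configurations."""
    current = (s,)
    for i in range(1, depth + 1):
        current = current + (f"c{i}",)  # pair the state with a context...
        yield current                   # ...which is a new state to clamp

# Each level of clamping generates a further state to clamp:
for state in states_to_clamp("s", 4):
    print(state)
```

However large `depth` is made, the last state yielded still awaits a context of its own: the inductive task Omohundro’s machine faces has no final member.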
Clamping any arbitrary s requires that we have already clamped some undefined set of contexts for s, and this condition applies inductively for all system states. So when Omohundro envisages a machine scanning its internal states to explicate their values, he seems to be proposing an infinite task that has already been completed by a being with vast but presumably still finite computational resources.
Block, Ned. 1986. “Advertisement for a Semantics for Psychology”. Midwest Studies in Philosophy 10 (1): 615-78.
Fodor, J. and Lepore, E. 1992. Holism: A Shopping Guide. Oxford: Blackwell.
Churchland, Paul. 2012. Plato’s Camera: How the Physical Brain Captures a Landscape of Abstract Universals. MIT Press (MA).
Omohundro, S. M. 2008. “The Basic AI Drives”. Frontiers in Artificial Intelligence and Applications 171: 483.
Critical Posthumanists argue that the idea of a universal human nature has lost its capacity to support our moral and epistemological commitments. The sources of this loss of foundational status are multiple according to writers like Donna Haraway, Katherine Hayles (1999), Neil Badmington (2003), Claire Colebrook and Rosi Braidotti. They include post-Darwinian naturalizations of life and mind that theoretically level differences between living and machinic systems, and the more intimate ways of enmeshing living entities in systems of control and exploitation that flow from the new life and cognitive sciences. Latterly, writers such as Braidotti and Colebrook have argued that a politics oriented purely towards the rights and welfare of humans is incapable of addressing issues such as climate change or ecological depletion in the Anthropocene era in which humans “have become a geological force capable of affecting all life on this planet” (Braidotti 2013: 66).
On the surface, this seems like a hyperbolic claim. If current global problems are a consequence of human regulation or mismanagement, then their solution will surely require human political and technological agency and institutions.
But let’s just assume that there is something to the critical posthumanist’s deconstruction of the human subject and that, in consequence, we can no longer assume that the welfare and agency of human subjects should be the exclusive goal of politics. If this is right, then critical posthumanism needs to do more than pick over the vanishing traces of the human in philosophy, literature and art. It requires an ethics that is capable of formulating the options open to some appropriately capacious political constituency in our supposedly post-anthropocentric age.
Braidotti’s recent work The Posthuman is an attempt to formulate such an ethics. Braidotti acknowledges and accepts the levelling of the status of human subjectivity implied by developments in cognitive science and biology and the “analytic posthumanism” that falls out of this new ontological vision. However, she is impatient with what she perceives as a disabling vacillation and neutrality that easily follows from the junking of the human subject as the arbiter of the right and the good. She argues that a posthuman ethics and politics need to retain the idea of political subjectivity: an agency capable of constructing new forms of ethical community and experimenting with new modes of being:
In my view, a focus on subjectivity is necessary because this notion enables us to string together issues that are currently scattered across a number of domains. For instance, issues such as norms and values, forms of community bonding and social belonging as well as questions of political governance both assume and require a notion of the subject.
However, according to Braidotti, this is no longer the classical self-legislating subject of Kantian humanism. It is a vital, polyvalent connection-maker constituted “in and by multiplicity” – by “multiple belongings”:
The relational capacity of the posthuman subject is not confined within our species, but it includes all non-anthropocentric elements. Living matter – including the flesh – is intelligent and self-organizing, but it is so precisely because it is not disconnected from the rest of organic life.
‘Life’, far from being codified as the exclusive property or unalienable right of one species, the human, over all others or of being sacralised as a pre-established given, is posited as process, interactive and open ended. This vitalist approach to living matter displaces the boundary between the portion of life – both organic and discursive – that has traditionally been reserved for anthropos, that is to say bios, and the wider scope of animal and nonhuman life also known as zoe (Braidotti 2013: 60).
Thus posthuman subjectivity, for Braidotti, is not human but a tendency inherent in human and nonhuman living systems alike to affiliate with other living systems to form new functional assemblages. Clearly, not everything has the capacity to perform every function. Nonetheless, living systems can be co-opted by other systems for functions “God” never intended and Mother Nature never designed them for. As Haraway put it: ‘No objects, spaces, or bodies are sacred in themselves; any component can be interfaced with any other if the proper standard, the proper code, can be constructed for processing signals in a common language’ (Haraway 1989: 187). There are no natural limits or functions for bodies or their parts, merely patterns of connection and operation that do not fall apart all at once.
Zoe . . . is the transversal force that cuts across and reconnects previously segregated species, categories and domains. Zoe-centered egalitarianism is, for me, the core of the post-anthropocentric turn: it is a materialist, secular, grounded and unsentimental response to the opportunistic trans-species commodification of Life that is the logic of advanced capitalism.
Of course, if anything can be co-opted for any function that its powers can sustain, one might ask how zoe can support a critique of advanced capitalism which, as Braidotti concedes, produces a form of the “posthuman” by radically disrupting the boundaries between humans, animals, species and technique. What could be a greater expression of zoe’s transversal potential than, say, Monsanto’s transgenic cotton Bollgard II? Bollgard II contains genes from the soil bacterium Bacillus thuringiensis that produce a toxin deadly to pests such as bollworm. Unless we believe that there is some telos inherent to thuringiensis or to cotton that makes such transversal crossings aberrant – which Braidotti clearly does not – there appears to be no zoe-eyed perspective that could warrant her objection. Monsanto’s genetic engineers are just sensibly utilizing possibilities for connection that are already afforded by living systems but which cannot be realized without technological mediation (here via gene transfer technology). If the genes responsible for producing the toxin Bt in thuringiensis did not work in cotton and increase yields, it would presumably not be the type used by the majority of farmers today (Ronald 2013).
Cognitive and biological capitalists like Google and Monsanto seem to incarnate the tendencies of zoe – conceived as a generalized possibility of connection – as much as the “not-for-profit” cyborg experimenters like Kevin Warwick or the publicly funded creators of HTML, Dolly the Sheep and Golden Rice. Doesn’t Google show us what a search engine can do?
We could object to Monsanto’s activities on the grounds that they have invidious social consequences or on the grounds that all technologies should be socially rather than corporately controlled. Neither of these arguments is obviously grounded in posthumanism or “zoe-centrism” – Marxist humanists would presumably agree with the latter claim, for example.
However, we can find the traces of a zoe-centered argument in the Deleuzean ethics explored in Braidotti’s essay “The Ethics of Becoming Imperceptible” (Braidotti 2006). This argues for an ethics oriented towards enabling entities to actualize their powers to their fullest “sustainable” extent. A becoming or actualization of power is sustainable if the assemblage or agency exercising it can do so without “destroying” the systems that make its exercise possible. Thus an affirmative posthuman ethics follows Nietzsche in making it possible for subjects to exercise their powers to the edge but not beyond, where that exercise falters or where the system exercising it falls apart.
To live intensely and be alive to the nth degree pushes us to the extreme edge of mortality. This has implications for the question of the limits, which are in-built in the very embodied and embedded structure of the subject. The limits are those of one’s endurance – in the double sense of lasting in time and bearing the pain of confronting ‘Life” as zoe. The ethical subject is one that can bear this confrontation, cracking up a bit but without having its physical or affective intensity destroyed by it. Ethics consists in re-working the pain into threshold of sustainability, when and if possible: cracking, but holding it, still.
So Capitalism can be criticized from the zoe-centric position if it constrains powers that could be more fully realized in a different system of social organization. For Braidotti, the capitalist posthuman is constrained by the demands of possessive individualism and accumulation.
The perversity of advanced capitalism, and its undeniable success, consists in reattaching the potential for experimentation with new subject formations back to an overinflated notion of possessive individualism . . ., tied to the profit principle. This is precisely the opposite direction from the non-profit experimentations with intensity, which I defend in my theory of posthuman subjectivity. The opportunistic political economy of bio-genetic capitalism turns Life/zoe – that is to say human and non-human intelligent matter – into a commodity for trade and profit (Braidotti 2013: 60-61).
Thus she supports “non-profit” experiments with contemporary subjectivity that show what “contemporary, biotechnologically mediated bodies are capable of doing” while resisting the neo-liberal appropriation of living entities as tradable commodities.
Whether the constraint claim is true depends on whether an independent non-capitalist posthuman (in Braidotti’s sense of the term) is possible or whether significant posthuman experimentation – particularly those involving sophisticated technologies like AI or Brain Computer Interfaces – will depend on the continued existence of a global capitalist technical system to support it. I admit to being agnostic about this. While modern technologies such as gene transfer do not seem essentially capitalist, there is little evidence to date that a noncapitalist system could develop them or their concomitant forms of hybridized “posthuman” more prolifically.
Nonetheless, there seems to be a significant ethical claim at issue here that can be used independently of its applicability to the critique of contemporary capitalism.
For example, I have recently argued for an overlap or convergence between critical posthumanism and Speculative Posthumanism (SP): the claim that descendants of current humans could cease to be human by virtue of a history of technical augmentation. Braidotti’s ethics of sustainability is pertinent here because SP in its strong form is also post-anthropocentric – it denies that posthuman possibility is structured a priori by human modes of thought or discourse – and because it defines the posthuman in terms of its power to escape from a socio-technical system organized around human-dependent ends (Roden 2012). The technological offspring described by SP will need to be functionally autonomous insofar as they will have to develop their own ends or modes of existence outside or beyond the human space of ends. Reaching “posthuman escape velocity” will require the cultivation and expression of powers in ways that are sustainable for such entities. This presupposes, of course, that we can have a conception of a subject or agent that is grounded in its embodied capacities or powers rather than in general principles applicable to human agency. Understanding its ethical valence thus requires an affirmative conception of these powers that is not dependent on overhanging anthropocentric ideas such as moral autonomy. Braidotti’s ethics of sustainability thus suggests some potentially viable terms of reference for formulating an ethics of becoming posthuman in the speculative sense.
Badmington, N. (2003) ‘Theorizing Posthumanism’, Cultural Critique 53 (Winter): 10-27.
Braidotti, R (2006), ‘The Ethics of Becoming Imperceptible”, in Deleuze and Philosophy, ed. Constantin Boundas, Edinburgh University Press: Edinburgh, 2006, pp. 133-159.
Braidotti, R (2013), The Posthuman, Cambridge: Polity Press.
Colebrook, Claire (2012a), “A Globe of One’s Own: In Praise of the Flat Earth”. Substance: A Review of Theory & Literary Criticism 41 (1): 30–39.
Colebrook, Claire (2012b), “Not Symbiosis, Not Now: Why Anthropogenic Change Is Not Really Human”. Oxford Literary Review 34 (2): 185–209.
Haraway, Donna (1989), ‘A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s’. Coming to Terms, Elizabeth Weed (ed.), London: Routledge, 173-204.
Hayles, K. N. (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
Roden, D. (2010). ‘Deconstruction and excision in philosophical posthumanism’. The Journal of Evolution & Technology, 21(1), 27-36.
Roden, D. (2012). ‘The Disconnection Thesis’. In Singularity Hypotheses (pp. 281-298). Springer Berlin Heidelberg.
Roden, D. (2013). ‘Nature’s Dark domain: an argument for a naturalized phenomenology’. Royal Institute of Philosophy Supplement, 72, 169-188.
Roden, D. (2014). Posthuman Life: Philosophy at the Edge of the Human. Acumen Publishing.