On November 29, 2013, in Uncategorized, by enemyin1

Nature has just published a dark philosophical tale by leading philosopher of mind Eric Schwitzgebel and Three Pound Brainer Scott Bakker. Enjoy!

Accelerationism and Posthumanism II

On November 21, 2013, in Uncategorized, by enemyin1

Accelerationism combines a transhumanist techno-optimism with a Marxist analysis of the dynamic between the relations and forces of production. Its proponents argue that under capitalism, modern technology is constrained by myopic and socially destructive goals. Rather than abandoning technological modernity for an illusory homeostatic Eden, they argue, we should exploit and ramp up its incendiary potential in order to escape from the gravity well of market-dominated resource allocation. Like posthumanism, however, Accelerationism comes in several flavours. Benjamin Noys (who coined the term) first identified Accelerationism as a kind of overkill politics invested in freeing the machinic unconscious described in the libidinal poststructuralisms of Lyotard and Deleuze from the domestication of liberal subjectivity and market mechanisms. This itinerary reaches its apogee in the work of Nick Land, who lent the project a cyberpunk veneer borrowed from the writings of William Gibson and Bruce Sterling.

Land’s Accelerationism aims at the extirpation of humanity in favour of an “abstract planetary intelligence rapidly constructing itself from the bricolaged fragments of former civilisations” (Srnicek and Williams 2013). However, this mirror-shaded beta version has been remodelled and given a new emancipatory focus by writers such as Ray Brassier, Nick Srnicek and Alex Williams (Williams 2013). This “promethean” phase of Accelerationism argues that technology should be reinstrumentalized towards a project of “maximal collective self-mastery”. Promethean Accelerationism espouses the same tactic of exacerbating the disruptive effects of technology, but with the aim of cultivating a more autonomous universal subject. As Steven Shaviro points out in his excellent talk “An Introduction to Accelerationism”, this version replicates orthodox Marxism at the level of both strategy and intellectual justification. Its vision of a rationally ordered collectivity mediated by advanced technology seems far closer to Marx’s ideas than, say, Adorno’s dismal negative dialectics or the reactionary identity politics that still animates multiculturalist thinking. If technological modernity is irreversible – short of a catastrophe that would render the whole programme moot – it may be the only prospectus that has a chance of working. As Shaviro points out, an incipient accelerationist logic is already at work among communities using free and open-source software like Pd, where R&D on code modules is distributed among skilled enthusiasts rather than professional software houses. (Note that a similar community flourishes around Pd’s fancier commercial cousin, Max/MSP, where supplementary external objects are written by users in C++, Java and Python.)

This is a small but significant move away from manufacture dominated by market feedback. We are beginning to see similar tendencies in the manufacture of durables and in biotech. The era of downloadable things is upon us. In April 2013, a libertarian group calling themselves Defense Distributed announced that they would release the code for a gun, “the Liberator”, which can be assembled from layers of plastic in a 3D printer (currently priced at around $8,000). The group’s spokesman, Cody Wilson, anticipates an era in which search engines will provide components “for everything from prosthetic limbs to drugs and birth-control devices”.

However, the alarm that the Liberator created in global law-enforcement agencies exemplifies the first of two potential pitfalls for the promethean accelerationist itinerary: the democratization of technology – enabled by its easy iteration from context to context – does not seem liable to increase our capacity to control its flows and applications; quite the contrary. This becomes significant when the iterated tech is not just a Max/MSP external for randomizing arrays but an offensive weapon, an engineered virus or a powerful AI program. I’ve argued elsewhere that technology has no essence and no itinerary. In its modern form at least, it is counter-final: it is not in control, but it is not in anyone’s control either, and the developments that appear to make a techno-insurgency conceivable are liable to ramp up its counter-finality. This, note, is a structural feature deriving from the increasing iterability of technique in modernity, not from market conditions. There is no reason to think that these issues would not be confronted by a more just world in which resources were better directed to identifiable social goods.

A second issue is also identified in Shaviro’s follow-up discussion over at The Pinocchio Theory: the posthuman. Using a science fiction allegory from a story by Paul Di Filippo, Shaviro suggests that the posthuman could be a figure for a decentred, vital mobilization against capitalism: a line of flight which uses the technologies of capitalist domination to develop new forms of association, embodiment and life. I think this prospectus is inspiring, but it also has moral dangers that Darian Meacham identifies in a paper forthcoming in The Journal of Medicine and Philosophy entitled “Empathy and Alteration: The Ethical Relevance of the Phenomenological Species Concept”. Very briefly, Meacham argues that the development of technologically altered descendants of current humans might precipitate what I term a “disconnection” – the point at which some part of the human socio-technical system spins off to develop separately (Roden 2012). I’ve argued that disconnection is multiply realizable – so far as we can tell. But Meacham suggests that a kind of disconnection could result if human descendants were to become sufficiently alien from us that “we” would no longer have a pre-reflective basis for empathy with them. We would no longer experience them as sharing our relation to the world or our intentions. Such a “phenomenological speciation” might fragment the notional universality of the human, leading to a multiverse of fissiparous and alienated clades like that envisaged in Bruce Sterling’s novel Schismatrix. A still more radical disconnection might result if super-intelligent AIs went “feral”. At this point, the subject of history itself becomes divided. It is no longer just about us. Perhaps Land remains the most acute and intellectually consistent accelerationist after all.

References

Roden, David (2012). “The Disconnection Thesis.” In The Singularity Hypothesis: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, Jim Moor, and Eric Steinhart. Springer Frontiers Collection.

Srnicek, N. and Williams, A. (2013). “#ACCELERATE MANIFESTO for an Accelerationist Politics.” http://criticallegalthinking.com/2013/05/14/accelerate-manifesto-for-an-accelerationist-politics/

Sterling, Bruce. 1996. Schismatrix Plus. Ace Books.

Williams, Alex (2013). “Escape Velocities.” E-flux (46). Accessed July 11. http://worker01.e-flux.com/pdf/article_8969785.pdf.


In “The Basic AI Drives” Steve Omohundro has argued that there is scope for predicting the goals of post-singularity entities able to modify their own software and hardware to improve their intellects. For example, systems that can alter their software or physical structure would have an incentive to make modifications that would help them achieve their goals more effectively, as humans have done over historical time. A concomitant of this, he argues, is that such beings would want to ensure that such improvements do not threaten their current goals:

So how can it ensure that future self-modifications will accomplish its current objectives? For one thing, it has to make those objectives clear to itself. If its objectives are only implicit in the structure of a complex circuit or program, then future modifications are unlikely to preserve them. Systems will therefore be motivated to reflect on their goals and to make them explicit (Omohundro 2008).

I think this assumption of ethical self-transparency is interestingly problematic. Here’s why:

Omohundro requires that there could be internal system states of post-singularity AIs whose value content could be legible to the system’s internal probes. Obviously, this assumes that the properties of a piece of hardware or software can determine the content of the system states it orchestrates independently of the external environment in which the system is located. This property of non-environmental determination is known as “local supervenience” in the philosophy of mind literature. If local supervenience for value-content fails, any inner state could signify different values in different environments. “Clamping” machine states to current values would then entail restrictions on the situations in which the system could operate as well as on possible self-modifications.
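To make the local supervenience point concrete, here is a minimal toy sketch (my illustration, not anything drawn from Omohundro’s paper; all names and values are hypothetical). If content supervenes locally, a state’s value is a function of the state alone and “clamping” is at least well defined; if it does not, the same inner state can express different values in different environments:

```python
# Toy illustration of local supervenience for value-content (editor's sketch).

def value_if_local(internal_state: str) -> str:
    """Content fixed by the intrinsic state alone (local supervenience holds)."""
    return {"s1": "maximize paperclips", "s2": "preserve current goals"}[internal_state]

def value_if_nonlocal(internal_state: str, environment: str) -> str:
    """Content fixed by state *and* environment (local supervenience fails)."""
    table = {
        ("s1", "world-A"): "maximize paperclips",
        ("s1", "world-B"): "maximize staples",  # same inner state, different value
    }
    return table[(internal_state, environment)]

print(value_if_local("s1"))                # the same value wherever the system is
print(value_if_nonlocal("s1", "world-A"))  # value varies with the environment
print(value_if_nonlocal("s1", "world-B"))
```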

Local supervenience might well not hold for system values. But let’s assume that it does. The problem for Omohundro is that the relevant inner determining properties are liable to be holistic. The intrinsic shape or colour of an icon representing a station on a metro map is arbitrary: there is nothing about a circle or a square or the colour blue that signifies “station”. It is only the conformity between the relations among the icons and the relations among the stations in the metro system it represents that does this (Churchland 2012 uses this analogy in his account of the meaning of prototype vectors in neural networks).

The moral of this is that once we disregard system-environment relations, the only properties liable to anchor the content of a system state are its relations to other states of the system. Thus the meaning of an internal state s under some configuration of the system must depend on some inner context (like a cortical map) where s is related to lots of other states of a similar kind (Fodor and Lepore 1992).
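The metro-map point can be rendered as a toy sketch (again my illustration, not Churchland’s model; all names are hypothetical): if a state’s content is anchored only by its relations to other inner states, then rewiring those relations changes what the very same state means.

```python
# Toy illustration of content holism (editor's sketch): a state's "meaning" is
# read off from its relations to other inner states, so the same state means
# something different under a rewired configuration.

def relational_profile(state, configuration):
    """A state's profile is simply the set of states it is connected to."""
    return frozenset(configuration.get(state, ()))

config_a = {"s": ("m1", "m2"), "m1": ("s",), "m2": ("s",)}  # one inner context for s
config_b = {"s": ("m3",), "m3": ("s",)}                     # a rewired context for s

print(relational_profile("s", config_a))  # frozenset({'m1', 'm2'})
print(relational_profile("s", config_b))  # frozenset({'m3'}) - same state, different "meaning"
```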

But relationships between the states of a self-modifying AI system are assumed to be extremely plastic, because each system will have an excellent model of its own hardware and software and the power to modify them (call this “hyperplasticity”). If these relationships are modifiable, then any given state could exist in alternative configurations. Such states might function like homonyms within or between languages, having very different meanings in different contexts.

Suppose that some hyperplastic AI needs to ensure that a state in one of its value circuits, s, retains the value it has under the machine’s current configuration: v*. To do this it must avoid altering itself in ways that would lead to s being in an inner context in which it meant some other value (v**, v***, etc.) or no value at all. It must clamp itself to the contexts in which s retains v*.

To achieve clamping, though, it needs to select possible configurations of itself in which s is paired with a context c that preserves its meaning.

The problem for the AI is that all [s + c] pairings are yet more internal system states, and any system state might assume different meanings in different contexts. To ensure that s means v* in context c, it needs to do to the compound state [s + c] what it was attempting with s alone – restrict itself to the supplementary contexts in which [s + c] leads to s having v* as its value and not something else.

Now, a hyperplastic machine will always be in a position to modify any configuration that it finds itself in (for good or ill). So this problem is replicated for any combination of states [s + c + c′ + . . .] that the machine could assume within its configuration space. Each of these states will have to be repeatable in yet other contexts, and so on. Since a concatenation of system states is itself a system state to which the principle of contextual variability applies, there is no final system state for which this issue does not arise.

Clamping any arbitrary s requires that we have already clamped some undefined set of contexts for s, and this condition applies inductively to all system states. So when Omohundro envisages a machine scanning its internal states to explicate their values, he seems to be proposing an infinite task that has already been completed by a being with vast but presumably still finite computational resources.
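The regress can be caricatured in a few lines of code (my reconstruction of the argument’s structure, not code from any cited source): a procedure that tries to clamp a state by fixing a meaning-preserving context must then clamp the compound state-plus-context, which is itself a state requiring a further context, and so on without end.

```python
# Toy rendering of the clamping regress (editor's sketch).
import sys

sys.setrecursionlimit(100)  # keep the demonstration short

def clamp(state, depth=0):
    """Try to fix the value of `state` by fixing a meaning-preserving context."""
    context = f"c{depth}"              # hypothetical supplementary context
    compound = (state, context)        # [s + c] is itself a system state...
    return clamp(compound, depth + 1)  # ...so it too needs clamping: regress

try:
    clamp("s")
except RecursionError:
    print("No finite sequence of clampings terminates the regress.")
```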

References

Block, Ned (1986). Advertisement for a semantics for psychology. Midwest Studies in Philosophy 10 (1):615-78.

Churchland, Paul. 2012. Plato’s Camera: How the Physical Brain Captures a Landscape of Abstract Universals. MIT Press (MA).

Omohundro, S. M. (2008). “The Basic AI Drives.” Frontiers in Artificial Intelligence and Applications 171, 483.

Braidotti’s Vital Posthumanism

On October 21, 2013, in Uncategorized, by enemyin1

Critical posthumanists argue that the idea of a universal human nature has lost its capacity to support our moral and epistemological commitments. The sources of this loss of foundational status are multiple according to writers like Donna Haraway, Katherine Hayles (1999), Neil Badmington (2003), Claire Colebrook and Rosi Braidotti. They include post-Darwinian naturalizations of life and mind that theoretically level differences between living and machinic systems, and the more intimate ways of enmeshing living entities in systems of control and exploitation that flow from the new life and cognitive sciences. Latterly, writers such as Braidotti and Colebrook have argued that a politics oriented purely towards the rights and welfare of humans is incapable of addressing issues such as climate change or ecological depletion in the Anthropocene era, in which humans “have become a geological force capable of affecting all life on this planet” (Braidotti 2013: 66).

On the surface, this seems like a hyperbolic claim. If current global problems are a consequence of human regulation or mismanagement, then their solution will surely require human political and technological agency and institutions.

But let’s just assume that there is something to the critical posthumanist’s deconstruction of the human subject and that, in consequence, we can no longer assume that the welfare and agency of human subjects should be the exclusive goal of politics. If this is right, then critical posthumanism needs to do more than pick over the vanishing traces of the human in philosophy, literature and art. It requires an ethics that is capable of formulating the options open to some appropriately capacious political constituency in our supposedly post-anthropocentric age.

Braidotti’s recent work The Posthuman is an attempt to formulate such an ethics. Braidotti acknowledges and accepts the levelling of the status of human subjectivity implied by developments in cognitive science and biology, and the “analytic posthumanism” that falls out of this new ontological vision. However, she is impatient with what she perceives as a disabling vacillation and neutrality that easily follows from the junking of the human subject as the arbiter of the right and the good. She argues that a posthuman ethics and politics need to retain the idea of political subjectivity: an agency capable of constructing new forms of ethical community and experimenting with new modes of being:

In my view, a focus on subjectivity is necessary because this notion enables us to string together issues that are currently scattered across a number of domains. For instance, issues such as norms and values, forms of community bonding and social belonging as well as questions of political governance both assume and require a notion of the subject.

However, according to Braidotti, this is no longer the classical self-legislating subject of Kantian humanism. It is a vital, polyvalent connection-maker constituted “in and by multiplicity” – by “multiple belongings”:

The relational capacity of the posthuman subject is not confined within our species, but it includes all non-anthropocentric elements. Living matter – including the flesh – is intelligent and self-organizing, but it is so precisely because it is not disconnected from the rest of organic life.

‘Life’, far from being codified as the exclusive property or unalienable right of one species, the human, over all others or of being sacralised as a pre-established given, is posited as process, interactive and open ended. This vitalist approach to living matter displaces the boundary between the portion of life – both organic and discursive – that has traditionally been reserved for anthropos, that is to say bios, and the wider scope of animal and nonhuman life also known as zoe (Braidotti 2013: 60).

Thus posthuman subjectivity, for Braidotti, is not human but a tendency inherent in human and nonhuman living systems alike to affiliate with other living systems to form new functional assemblages. Clearly, not everything has the capacity to perform every function. Nonetheless, living systems can be co-opted by other systems for functions “God” never intended and Mother Nature never designed them for. As Haraway put it:  ‘No objects, spaces, or bodies are sacred in themselves; any component can be interfaced with any other if the proper standard, the proper code, can be constructed for processing signals in a common language’ (Haraway 1989: 187). There are no natural limits or functions for bodies or their parts, merely patterns of connection and operation that do not fall apart all at once.

Zoe . . . is the transversal force that cuts across and reconnects previously segregated species, categories and domains. Zoe-centered egalitarianism is, for me, the core of the post-anthropocentric turn: it is a materialist, secular, grounded and unsentimental response to the opportunistic trans-species commodification of Life that is the logic of advanced capitalism.

Of course, if anything can be co-opted for any function that its powers can sustain, one might ask how zoe can support a critique of advanced capitalism, which, as Braidotti concedes, produces a form of the “posthuman” by radically disrupting the boundaries between humans, animals, species and technique. What could be a greater expression of zoe’s transversal potential than, say, Monsanto’s transgenic cotton Bollgard II? Bollgard II contains genes from the soil bacterium Bacillus thuringiensis that produce a toxin deadly to pests such as bollworm. Unless we believe that there is some telos inherent in thuringiensis or in cotton that makes such transversal crossings aberrant – which Braidotti clearly does not – there appears to be no zoe-eyed perspective that could warrant her objection. Monsanto’s genetic engineers are just sensibly utilizing possibilities for connection that are already afforded by living systems but which cannot be realized without technological mediation (here via gene transfer technology). If the genes responsible for producing the Bt toxin in thuringiensis did not work in cotton and increase yields, it would presumably not be the type used by the majority of farmers today (Ronald 2013).

Cognitive and biological capitalists like Google and Monsanto seem to incarnate the tendencies of zoe – conceived as a generalized possibility of connection – as much as the “not-for-profit” cyborg experimenters like Kevin Warwick or the publicly funded creators of HTML, Dolly the Sheep and Golden Rice. Doesn’t Google show us what a search engine can do?

We could object to Monsanto’s activities on the grounds that they have invidious social consequences, or on the grounds that all technologies should be socially rather than corporately controlled. Neither of these arguments is obviously grounded in posthumanism or “zoe-centrism” – Marxist humanists would presumably agree with the latter claim, for example.

However, we can find the traces of a zoe-centered argument in the Deleuzean ethics explored in Braidotti’s essay “The Ethics of Becoming Imperceptible” (Braidotti 2006). This argues for an ethics oriented towards enabling entities to actualize their powers to their fullest “sustainable” extent. A becoming or actualization of power is sustainable if the assemblage or agency exercising it can do so without “destroying” the systems that make its exercise possible. Thus an affirmative posthuman ethics follows Nietzsche in making it possible for subjects to exercise their powers to the edge but not beyond, where that exercise falters or where the system exercising it falls apart.

To live intensely and be alive to the nth degree pushes us to the extreme edge of mortality. This has implications for the question of the limits, which are in-built in the very embodied and embedded structure of the subject. The limits are those of one’s endurance – in the double sense of lasting in time and bearing the pain of confronting ‘Life’ as zoe. The ethical subject is one that can bear this confrontation, cracking up a bit but without having its physical or affective intensity destroyed by it. Ethics consists in re-working the pain into threshold of sustainability, when and if possible: cracking, but holding it, still.

So capitalism can be criticized from the zoe-centric position if it constrains powers that could be more fully realized under a different system of social organization. For Braidotti, the capitalist posthuman is constrained by the demands of possessive individualism and accumulation.

The perversity of advanced capitalism, and its undeniable success, consists in reattaching the potential for experimentation with new subject formations back to an overinflated notion of possessive individualism . . ., tied  to the profit principle. This is precisely the opposite direction from the non-profit experimentations with intensity, which I defend in my theory of posthuman subjectivity. The opportunistic political economy of bio-genetic capitalism turns Life/zoe – that is to say human and non-human intelligent matter – into a commodity for trade and profit (Braidotti 2013: 60-61).

Thus she supports “non-profit” experiments with contemporary subjectivity that show what “contemporary, biotechnologically mediated bodies are capable of doing” while resisting the neo-liberal appropriation of living entities as tradable commodities.

Whether the constraint claim is true depends on whether an independent non-capitalist posthuman (in Braidotti’s sense of the term) is possible or whether significant posthuman experimentation – particularly those involving sophisticated technologies like AI or Brain Computer Interfaces – will depend on the continued existence of a global capitalist technical system to support it. I admit to being agnostic about this. While modern technologies such as gene transfer do not seem essentially capitalist, there is little evidence to date that a noncapitalist system could develop them or their concomitant forms of hybridized “posthuman” more prolifically.

Nonetheless, there seems to be a significant ethical claim at issue here that can be used independently of its applicability to the critique of contemporary capitalism.

For example, I have recently argued for an overlap or convergence between critical posthumanism and Speculative Posthumanism: the claim that descendants of current humans could cease to be human by virtue of a history of technical augmentation (SP). Braidotti’s ethics of sustainability is pertinent here because SP in its strong form is also post-anthropocentric – it denies that posthuman possibility is structured a priori by human modes of thought or discourse – and because it defines the posthuman in terms of its power to escape from a socio-technical system organized around human-dependent ends (Roden 2012). The technological offspring described by SP will need to be functionally autonomous insofar as they will have to develop their own ends or modes of existence outside or beyond the human space of ends. Reaching “posthuman escape velocity” will require the cultivation and expression of powers in ways that are sustainable for such entities. This presupposes, of course, that we can have a conception of a subject or agent that is grounded in their embodied capacities or powers rather than general principles applicable to human agency. Understanding its ethical valence thus requires an affirmative conception of these powers that is not dependent on overhanging  anthropocentric ideas such as moral autonomy. Braidotti’s ethics of sustainability thus suggests some potentially viable terms of reference for formulating an ethics of becoming posthuman in the speculative sense.

References

Badmington, N. (2003) ‘Theorizing Posthumanism’, Cultural Critique 53 (Winter): 10-27.

Braidotti, R. (2006). ‘The Ethics of Becoming Imperceptible’, in Deleuze and Philosophy, ed. Constantin Boundas, Edinburgh: Edinburgh University Press, pp. 133-159.

Braidotti, R (2013), The Posthuman, Cambridge: Polity Press.

Colebrook, Claire (2012a). “A Globe of One’s Own: In Praise of the Flat Earth.” Substance: A Review of Theory & Literary Criticism 41 (1): 30–39.

Colebrook, Claire (2012b). “Not Symbiosis, Not Now: Why Anthropogenic Change Is Not Really Human.” Oxford Literary Review 34 (2): 185–209.

Haraway, Donna (1989), ‘A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s’. Coming to Terms, Elizabeth Weed (ed.), London: Routledge, 173-204.

Hayles, K. N. (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

Roden, D. (2010). ‘Deconstruction and excision in philosophical posthumanism’. The Journal of Evolution & Technology, 21(1), 27-36.

Roden, D. (2012). ‘The Disconnection Thesis’. In Singularity Hypotheses (pp. 281-298). Springer Berlin Heidelberg.

Roden, D. (2013). ‘Nature’s Dark domain: an argument for a naturalized phenomenology’. Royal Institute of Philosophy Supplement, 72, 169-188.

Roden, D. (2014). Posthuman Life: Philosophy at the Edge of the Human. Acumen Publishing.

 

London “Singularity Hypothesis” Event

On May 12, 2013, in Uncategorized, by enemyin1

 

Video footage of the Singularity Hypothesis event in London yesterday.