Kevin has provided a typically engaging gloss on the difference between posthumanism and transhumanism over at the IEET site. I don’t fundamentally disagree with his account of transhumanism (though I think he needs to emphasize its fundamentally normative character) but the account of posthumanism he gives here has some shortcomings:

Two significant differences between transhumanism and the posthuman is the posthuman’s focus on information and systems theories (cybernetics), and the posthuman’s consequent, primary relationship to digital technology; and also the posthuman’s emphasis on systems (such as humans) as distributed entities—that is, as systems comprised of, and entangled with, other systems.  Transhumanism does not emphasize either of these things. 

Posthumanism derives from the posthuman because the latter represents the death of the humanist subject: the qualities that make up that subject depend on a privileged position as a special, stand-alone entity that possesses unique characteristics that make it exceptional in the universe—characteristics such as unique and superior intellect to all other creatures, or a natural right to freedoms that do not accrue similarly to other animals.  If the focus is on information as the essence of all intelligent systems, and materials and bodies are merely substrates that carry the all-important information of life, then there is no meaningful difference between humans and intelligent machines—or any other kind of intelligent system, such as animals. 

Now, I realize we can spin definitions to different ends; but even allowing for our different research aims, this won’t do. Posthumanists may, but need not, claim that humans are becoming more intertwined with technology. They may, but need not, claim that functions, relations or systems are more ontologically basic than intrinsic properties. Many arch-humanists are functionalists, holists or relationists (I Kant, R Brandom, D Davidson, G Hegel . . .) and one can agree that human subjectivity is constitutively technological (A Clark) without denying its distinctive moral or epistemological status. Reducing stuff to relations can be a way of emphasizing the transcendentally constitutive status of the human subject, taking anthropocentrism to the max (see below). Emphasizing the externality or contingency of relations can be a way of arguing that things are fundamentally independent of that constitutive activity (as in Harman’s OOO or DeLanda’s assemblage ontology).

So I raise Kevin’s thumbnails with a few of my own.

  • A philosopher is a humanist if she believes that humans are importantly distinct from non-humans and supports this distinctiveness claim with a philosophical anthropology: an account of the central features of human existence and their relations to similarly general aspects of nonhuman existence.
  • A humanist philosophy is anthropocentric if it accords humans a superlative status that all or most nonhumans lack.
  • Transhumanists claim that technological enhancement of human capacities is a desirable aim (all other things being equal). So the normative content of transhumanism is largely humanist. Transhumanists just hope to add some new ways of cultivating human values to the old unreliables of education and politics.
  • Posthumanists reject anthropocentrism. So philosophical realists, deconstructionists, new materialists, Cthulhu cultists and naturalists are posthumanists even if they are unlikely to crop up on one another’s Christmas lists.

For more, see my forthcoming book Posthuman Life and my post Humanism, Transhumanism and Posthumanism.


Conferences in September

On June 21, 2014, in Uncategorized, by enemyin1

I’ll be attending conferences at either end of our continent in September:

Philosophy After Nature, University of Utrecht, 3-5 September

Posthuman Politics, 25-28 September 2014, University of the Aegean, Department of Cultural Technology and Communication, Geography Building, University Campus.

I’m presenting the same paper at both. Here’s the abstract, though the details of the argument remain to be filled in!

On Reason and Spectral Machines: an anti-normativist response to Bounded Posthumanism

David Roden, The Open University UK

In Posthuman Life I distinguish two speculative claims regarding technological successors to current humans: an anthropologically bounded posthumanism (ABP) and an anthropologically unbounded posthumanism. ABP holds:

1) There are transcendental constraints on cognition and agency that any entity qualifying as a posthuman successor under the Disconnection Thesis (Roden 2012, 2014) would have to obey.

2) These constraints are realized in the structure of human subjectivity and rationality.

One version of ABP is implied by normativist theories of intentionality for which original or “first class” intentionality is only possible for beings that can hold one another publicly to account by ascribing and adopting normative statuses (Brandom 1994). If Normativist ABP is correct, then posthumans – were they to exist – would not be so different from us for they would have to belong to discursive communities and subscribe to inter-subjective norms (See Wennemann 2013).

Normativist ABP thus imposes severe constraints on posthuman “weirdness” and limits the political implications of speculative claims about posthuman possibility such as those in my book. In this paper, I will argue that we should reject Normativist ABP because we should reject normativist theories of intentionality. For normativism to work, it must be shown that the objectivity and “bindingness” of social norms are independent of individual beliefs or endorsements. I will argue that the only way in which this can be achieved is by denying the dependence of normative statuses upon the particular dispositions, states and attitudes of individuals, thus violating plausible naturalistic constraints on normativism.

In response, I will argue for an anthropologically unbounded posthumanism for which all constraints on posthuman possibility must be discovered empirically by making posthumans or becoming posthuman. This implies a similarly unbounded posthuman politics for which there is no universal reason or transhistorical subjectivity.

Bibliography

Bakker, Scott. 2014. “The Blind Mechanic II: Reza Negarestani and the Labor of Ghosts.” Three Pound Brain. Retrieved April 30, 2014, from https://rsbakker.wordpress.com/2014/04/13/the-blind-mechanic-ii-reza-negarestani-and-the-labour-of-ghosts/

Brandom, R. 1994. Making It Explicit: Reasoning, Representing, and Discursive Commitment. Cambridge, Mass.: Harvard University Press.

Brandom, R. 2001. Articulating Reasons: An Introduction to Inferentialism. Cambridge, Mass.: Harvard University Press.

Brandom, R. 2002. Tales of the Mighty Dead: Historical Essays in the Metaphysics of Intentionality. Cambridge: Cambridge University Press.

Brandom, R. 2006. “Kantian Lessons about Mind, Meaning, and Rationality.” Southern Journal of Philosophy 44: 49–71.

Brandom, R. 2007. “Inferentialism and Some of Its Challenges.” Philosophy and Phenomenological Research 74 (3): 651–676.

Brassier, R. 2011. “The View from Nowhere.” Identities: Journal for Politics, Gender and Culture (17): 7–23.

Davidson, D.  1986. “A Nice Derangement of Epitaphs.” In Truth and Interpretation, E. LePore (ed), 433-46. Oxford: Blackwell.

Negarestani, Reza. 2014. “The Labor of the Inhuman, Part I: Human.” e-flux. Retrieved April 30, 2014, from http://www.e-flux.com/journal/the-labor-of-the-inhuman-part-i-human/

Negarestani, Reza. 2014. “The Labor of the Inhuman, Part II: The Inhuman.” e-flux. Retrieved April 30, 2014, from http://www.e-flux.com/journal/the-labor-of-the-inhuman-part-ii-the-inhuman/

Roden, D. 2012. “The Disconnection Thesis.” The Singularity Hypothesis: A Scientific and Philosophical Assessment, A. Eden, J. Søraker, J. Moor & E. Steinhart (eds), 281-298. London: Springer.

Roden, David. 2014. Posthuman Life: Philosophy at the Edge of the Human. Routledge.

Turner, S. P. 2010. Explaining the Normative. Polity.

Wennemann, D. J. 2013. Posthuman Personhood. New York: University Press of America.


Computer Music and Posthumanism

On June 9, 2014, in Uncategorized, by enemyin1

A possibly ill-advised idea for a presentation on computer music and posthumanism entitled “Computer Music and Posthumanism”.

I will introduce two flavors of posthumanism: critical posthumanism (CP) and speculative posthumanism (SP) and provide an overview of some of the ways in which they might be explored by thinking through philosophical issues raised by computer music practice.
CP questions the dualist modes of thinking that have traditionally assigned human subjects a privileged place within philosophical thought: for example, the distinction between the formative power of minds and subjects and the inertia of matter.
The use of computers to supplement human performance raises questions about where agency is ascribed. Is it always on the side of the human musician, or can it also be ascribed to the devices or software used to generate sound events? If so, what kind of status can be granted to such artificial agents? Does their agency locally supervene on human agency, for example? I will also argue that the intractability and complexity of some computer-generated sound confronts us with the nonhuman, mind-independent reality of sonic events. It thus provides an aesthetic grounding for a posthumanist realism.
SP (by contrast) is a metaphysical possibility claim about technological successors to humans. It can be summed up in the SP Schema: “Descendants of current humans could cease to be human by virtue of a history of technical alteration.” CP and SP are conceptually distinct but, I argue, the most radical form of SP converges with the anti-anthropocentrism of CP (Roden 2014). In particular, non-anthropologically bounded SP implies that the only way in which we can acquire substantive knowledge of posthumans is through making posthumans or becoming posthuman. I will argue that computer music development may have a role in this project of engineering a posthuman succession.

Roden, D. 2010b. “Sonic Art and the Nature of Sonic Events.” Review of Philosophy and Psychology 1 (1): 141–156.
Roden, D. 2012. “The Disconnection Thesis.” The Singularity Hypothesis: A Scientific and Philosophical Assessment, A. Eden, J. Søraker, J. Moor & E. Steinhart (eds), 281-298. London: Springer.
Roden, David. 2014. Posthuman Life: Philosophy at the Edge of the Human. Routledge.


No Future? Catherine Malabou on the Humanities

On February 19, 2014, in Uncategorized, by enemyin1

Catherine Malabou has an intriguing piece on the vexed question of the relationship between the “humanities” and science in the journal Transeuropeennes here.

It is dominated by a clear and subtle reading of Kant, Foucault and Derrida’s discussion of the meaning of Enlightenment and modernity. Malabou argues that the latter thinkers attempt to escape Kantian assumptions about human invariance by identifying the humanities with “plasticity itself”. The Humanities need not style themselves in terms of some invariant essence of humanity. They can be understood as a site of transformation and “deconstruction” as such.  Thus for Derrida in “University Without Condition”, the task of the humanities is:

the deconstruction of « what is proper to man » or to humanism. The transgression of the transcendental implies that the very notion of limit or frontier will proceed from a contingent, that is, historical, mutable, and changing deconstruction of the frontier of the « proper ».

For Foucault, by contrast, the deconstruction of the human involves exhibiting its historical conditions of possibility and experimenting with these by, for example, thinking about “our ways of being, thinking, the relation to authority, relations between the sexes, the way in which we perceive insanity or illness”.

This analysis might suggest that the Humanities have little to fear from technological and scientific transformations of human bodies or minds; they are just the setting in which the implications of these alterations are hammered out.

This line of thought reminds me of a revealingly bad argument produced by Andy Clark in his Natural Born Cyborgs:

The promise, or perhaps threatened, transition to a world of wired humans and semi-intelligent gadgets is just one more move in an ancient game . . . We are already masters at incorporating nonbiological stuff and structure deep into our physical and cognitive routines. To appreciate this is to cease to believe in any post-human future and to resist the temptation to define ourselves in brutal opposition to the very worlds in which so many of us now live, love and work (Clark 2003, 142).

This is obviously broken-backed: that earlier bootstrapping didn’t produce posthumans doesn’t entail that future ones won’t. Even if humans are essentially self-modifying, it doesn’t follow that any prospective self-modifying entity is human.

The same problem afflicts Foucault and Derrida’s attempts to hollow out a reservation for humanities scholars by identifying them with the promulgation of transgression or deconstruction. Identifying the humanities with plasticity as such throws the portals of possibility so wide that it can only refer to an abstract possibility space whose contents and topology remain closed to us. If, with Malabou, we allow that some of these transgressions will operate on the material substrate of life, then we cannot assume that its future configurations will resemble human communities or human thinkers – thinkers concerned with topics like sex, work and death, for example.

Malabou concludes with the suggestion that Foucault and Derrida fail to confront a quite different problem. They do not provide a historical explanation of the possibility of transformations of life and mind to which they refer:

They both speak of historical transformations of criticism without specifying them. I think that the event that made the plastic change of plasticity possible was for a major part the discovery of a still unheard of plasticity in the middle of the XXth century, and that has become visible and obvious only recently, i.e. the plasticity of the brain that worked in a way behind continental philosophy’s back. The transformation of the transcendental into a plastic material did not come from within the Humanities. It came precisely from the outside of the Humanities, with again, the notion of neural plasticity. I am not saying that the plasticity of the human as to be reduced to a series of neural patterns, nor that the future of the humanities consists in their becoming scientific, even if neuroscience tends to overpower the fields of human sciences (let’s think of neurolinguistics, neuropsychoanalysis, neuroaesthetics, or of neurophilosophy), I only say that the Humanities had not for the moment taken into account the fact that the brain is the only organ that grows, develops and maintains itself in changing itself, in transforming constantly its own structure and shape. We may evoke on that point a book by Norman Doidge, The Brain that changes itself. Doidge shows that this changing, self-fashioning organ is compelling us to elaborate new paradigms of transformation.

I’m happy to concede that the brain is a special case of biological plasticity, but, as Eileen Joy notes elsewhere, the suggestion that the humanities have been out of touch with scientific work on the brain is unmotivated. The engagement between the humanities (or philosophy, at least) and neuroscience already includes work as diverse as Paul and Patricia Churchland’s work on neurophilosophy and Derrida’s early writings on Freud’s Scientific Project.

I’m also puzzled by the suggestion that we need to preserve a place for transcendental thinking at all here. Our posthuman predicament consists in the realization that we are alterable configurations of matter and that our powers of self-alteration are changing in ways that put the future of human thought and communal life in doubt. This is not a transcendental claim. It’s a truistic generalisation which tells us little about the cosmic fate of an ill-assorted grab bag of academic disciplines.

References

Clark, A. 2003. Natural-born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. New York: Oxford University Press.

People and cultures have some non-overlapping beliefs. Some folk believe that there is a God, some that there is no God, some that there are many gods. Some people believe that personal autonomy is a paramount value, while others feel that virtues like honour and courage take precedence over personal freedom. These core beliefs are serious, in that they make a difference to whether people live or die, or are able to live the kinds of life that they wish. People fight and die for the sake of autonomy. People fight, die or institute gang rapes in the interests of personal honour.

Some folk – the self-styled pluralists – believe that respect for otherness is a paramount political value. Respecting otherness, they say, is so paramount that it should regulate our ontological commitments – our assumptions about what exists. I must admit that I find this hard to credit ontologically or ethically. But it is also unclear how we should spell the principle out. So I’ll consider two versions that have circulated in the blogosphere recently. The first, I will argue, teeters on incoherence or, where not incoherent, is hard to justify in ethical or political terms. The second – which demands that we build a common world – may also be incoherent, but I will argue that we have no reason to think that its ultimate goal is realisable.

According to Philip at Circling Squares, Isabel Stengers and Bruno Latour think that this position should enjoin us to avoid ridiculing or undermining others’ values or ontologies. Further, that we should:

grant that all entities exist and, second, that to say that someone’s cherished idol (or whatever disputed entity they hold dear) is non-existent is a ‘declaration of war’ – ‘this means war,’ as Stengers often says.

I’ll admit that I find the first part of this principle damn puzzling. Even if we assume – for now – that it is wrong to attempt to undermine another person’s central beliefs, this principle seems to require either a) that people actually embrace ontological commitments that are contrary to the ones they adhere to; b) that they pretend not to have their core beliefs; or c) that they adopt some position of public neutrality vis-à-vis all core beliefs.

The first interpretation (a) results in the principle that one should embrace the contrary of every core belief; or, in effect, that no one should believe anything. So (in the interests of charity) we should pass on.

b) allows us to have beliefs so long as they are unexpressed. Depending on your view of beliefs, this is either incoherent (because there are no inexpressible beliefs) or imposes burdens on believers that no one is likely to find acceptable.

So I take Philip to embrace c). His clarification suggests something along these lines: he claims that it is consistent with respecting otherness to say what we believe about others’ idols but not to publicly undermine their reasons for believing in them. Thus:

Their basic claim seems to be that ‘respect for otherness,’ i.e. political pluralism, can only come from granting the entities that others hold dear an ontology, even if you don’t ‘believe’ in them.  You are thus permitted to say ‘I do not follow that god, he has no hold over me’ but you are not permitted to say ‘your god is an inane, infantile, non-existent fantasy, grow up.’  And it’s not just a question of politeness (although there’s that too).  The point is to grant others’ idols and deities an existence – one needn’t agree over what that existence entails, over what capacities that entity has or what obligations it impresses upon you as someone in its partial presence but to deny it existence entirely is to ‘declare war’ – to deny the possibility of civil discourse, of pluralistic co-existence.

I must admit that I find this principle of respect puzzling as well. After all, some of my reasons for being an atheist are also reasons against being a theist. So unless this is just an innocuous plea for good manners (which I’m happy to sign up to on condition that notional others show me and mine the same forbearance) it seems to require that all believers keep their reasons for their belief to themselves. This, again, seems to demand an impossible or repugnant quietism.

So, thus far, ontological pluralism seems to be either incoherent or to impose such burdens on all believers that nobody should be required to observe it. There is, of course, a philosophical precedent for restricted ontological quietism in Rawls’ political liberalism. Rawls proposes that reasonable public deliberation recognize the “burdens of judgement” by omitting any justification that hinges on “comprehensive” ethical or religious doctrines over which there can be reasonable disagreement (Rawls 2005, 54). Deliberations about justice under Political Liberalism are thus constrained to be neutral towards “conflicting worldviews” so long as they are tolerant and reasonable (Habermas 1995, 119, 124-5).

However, there is an important difference between the political motivations behind Rawlsian public reason and the position of “ontological charity” Philip attributes to Stengers and Latour. Rawls is motivated by the need to preserve stability within plural democratic societies. Public reason does not apply outside the domain of political discourse in which reasonable citizens hash out basic principles of justice and constitutional essentials. It is also extremely problematic in itself. Habermas argues that Rawls’ exclusion of plural ethical or religious beliefs from the public court is self-vitiating, because comprehensive perspectives are sources of disagreement about shared principles (for example, the legitimacy of abortion or same-sex marriage) and these must accordingly be addressed through dialogue rather than circumvented if a politically stable consensus is to be achieved (126).

Finally, apart from being incoherent, the principle of ontological charity seems unnecessary. As Levi Bryant points out in his realist retort to the pluralist, people are not the sum of their beliefs. Beliefs can be revised without effacing the believer. Thus an attack on core beliefs is not an attack on the person holding those beliefs.

So it is hard to interpret the claim that we should grant the existence of others’ “idols” as much more than the principle that it is wrong to humiliate, ridicule or insult people because of what their beliefs are. This seems like a good rule of thumb, but it is hard to justify the claim that it is an overriding principle. For example, even if Rushdie’s Satanic Verses “insults Islam”, having an open society in which aesthetic experimentation and the critical evaluation of ideas is possible is just more important than saving certain sections of it from cognitive dissonance or intellectual discomfort. Too many people have suffered death, terror and agony because others had aberrant and false core beliefs to make it plausible that these should be immune from criticism or ridicule. A little personal dissonance is a small price to pay for not going to the oven.

So what of the principle that we should build a “common world”? This is set out by Jeremy Trombley on his Struggle Forever blog under the rubric of “cosmopolitics”. Jeremy regards this project as an infinite task that requires us to seek a kind of fusion between different world views, phenomenologies and ontologies:

The project, as Latour, Stengers, James, and others have described it, is to compose a common world. What pluralism recognizes is that, in this project, we all start from different places – Latour’s relativity rather than relativism. The goal, then, (and it has to be recognized that this project is always contingent and prone to failure) is to make these different positions converge, but in a way that doesn’t impose one upon the other as the Modern Nature/Culture dichotomy tends to do. Why should we avoid imposing one on the other? In part because it’s the right thing to do – by imposing we remove or reduce the agency of the other. The claim to unmediated access to reality makes us invulnerable – no other claim has that grounding, and therefore we can never be wrong. But we are wrong – the science of the Enlightenment gave us climate change, environmental destruction, imperialism in the name of rationality (indigenous peoples removed from their land and taken to reeducation facilities where they were taught “rational” economic activities such as farming), and so on. It removed us from the world and placed us above it – the God’s eye view.

I think there are a number of things wrong with cosmopolitics as Jeremy describes it here.

Firstly, seeking to alter beliefs or values does not necessarily reduce agency because people are not their beliefs.

Secondly, some worldviews – like the racist belief-systems that supported the European slave trade – just need to be imposed upon because they are bound up with violent and corrupting socio-political systems.

Thirdly, I know of no Enlightenment thinker, or realist, for whom “unmediated access to reality” is a sine qua non for knowledge. Let’s assume that “realism” is the contrary of pluralism here. It’s not clear what unmediated access would be like, but all realists are committed to the view that we don’t have it, since if reality has a mind-independent existence and nature, it can presumably vary independently of our beliefs about it. In its place, we have various doctrines of evidence and argument that are themselves susceptible to revision. Some analyses of realism suppose that realists are committed to the claim that there is one true account of the world (the God’s Eye View) but – as pointed out in an earlier post – this commitment is debatable. In any case, supposing the existence of a uniquely true theory is very different from claiming to have it.

Finally, much hinges on what we mean by a common world here. I take it that it is not the largely mind-independent reality assumed by the realist since – being largely mind-independent – it exists quite independently of any political project. So I take it that Jeremy is adverting to something like a shared phenomenology or experience: a kind of fusion of horizons at the end of time. If we inflect “world” in this sense, then there is no reason for believing that such an aim is possible, let alone coherent. This possibility depends on there being structures of worldhood that are common to all beings that can be said to have one (Daseins, say). I’ve argued that there are no reasons for holding that we have access to such a priori knowledge because – like Scott Bakker – I hold that phenomenology gives us very limited insight into its nature. Thus we have no a priori grasp of what a world is and no reason to believe that Daseins (human or nonhuman) could ever participate in the same one. The argument for this is lengthy, so I refer the reader to my paper “Nature’s Dark Domain” and my forthcoming book Posthuman Life.

References

Habermas, Jürgen. 1995. “Reconciliation through the Public Use of Reason: Remarks on John Rawls’s Political Liberalism.” The Journal of Philosophy 92 (3): 109–131.

Rawls, John. 2005. Political Liberalism. Columbia University Press.


Putnam and Speculative Realism

On January 16, 2014, in Uncategorized, by enemyin1

Stephen Shakespeare has an interesting post over at An und für sich discussing Hilary Putnam’s argument against Metaphysical Realism and the positions of contemporary speculative realists like Meillassoux and Harman. Putnam (circa Reason, Truth and History) treats Metaphysical Realism (MR) as a package deal with three components: Independence (there is a fixed totality of mind-independent objects); Correspondence (there are word-world relations between bits of theories and the things to which they refer); and Uniqueness (there is one true theory that correctly describes the state of these objects).

He then uses his model-theoretic argument to undermine Uniqueness. Given an epistemically ideal theory and an interpretation function which maps that theory onto one of some totality of possible worlds, you can always come up with another mapping, and hence another theory, that is equally true of that world, equally elegant, simple, well-confirmed, etc. Unless there is some other property, beyond its epistemic and semantic virtues, that picks out a single theory as God’s Own, Uniqueness fails and with it MR.
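Putnam’s full construction is intricate, but the core permutation trick behind it can be illustrated with a toy finite model. The following Python sketch is my own illustration, not Putnam’s notation: the domain, the predicate “Cat” and the constant “felix” are invented for the example. It shows how permuting the domain yields a genuinely different word-world mapping under which exactly the same atomic sentences come out true:

```python
# A toy finite model: a domain of three objects, one predicate, one constant.
domain = {"a1", "a2", "a3"}
interp = {
    "Cat": {"a1"},   # extension of the predicate Cat
    "felix": "a1",   # referent of the constant felix
}

def satisfies(interpretation, atom):
    """Truth of an atomic sentence Pred(const) under an interpretation."""
    pred, const = atom
    return interpretation[const] in interpretation[pred]

def permuted(interpretation, pi):
    """Reinterpret every symbol through a permutation pi of the domain."""
    return {sym: ({pi[x] for x in val} if isinstance(val, set) else pi[val])
            for sym, val in interpretation.items()}

# A non-trivial permutation of the domain induces a distinct reference
# scheme that nonetheless preserves the truth value of every sentence.
pi = {"a1": "a2", "a2": "a3", "a3": "a1"}
twin = permuted(interp, pi)

assert twin != interp  # a genuinely different word-world mapping...
assert satisfies(interp, ("Cat", "felix")) == satisfies(twin, ("Cat", "felix"))
```

The point, on Putnam’s behalf, is that nothing in the epistemic or semantic virtues of a theory selects between `interp` and its permuted twin as the intended interpretation.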

Shakespeare argues that speculative realists reject the form of the independence thesis, denying that there is a fixed totality of mind-independent objects:

[Contemporary Realism] need not entail a conviction that objects in the world are a ‘fixed totality’. Objects can change or join to form new, irreducibly real objects. The lists of objects which are part of the rhetorical style of OOO encompass radically diverse things, including physical assemblages, social groups and fictional works. Each of these ‘objects’ consists of other irreducible objects and so on. There is not simply one stratum of object.

For Meillassoux, the picture is different. In one respect, the absolute consists of the fact that anything can be different for no reason: there is no founding ontological or transcendental necessity for the order of things. And this is what we can know. So his realism also does not entail that there is one fixed totality, or one complete and true description of things.

I demur partly from this analysis of where SR diverges from MR – though I’m happy to be persuaded otherwise. By “fixed” Putnam just means determinate. If there are fictional objects or sensa, then these must be part of God’s Own Theory (given MR). If there are assemblages with emergent properties, they too might click into God’s Own Ontology. Moreover, the Harmiverse has to consist of discrete, encodable objects, so it’s quite susceptible to a model-theoretic analysis of the kind that Putnam offers (See my Harman on Patterns and Harms).

Shakespeare may be right about Meillassoux’s ontology. One could argue that hyperchaos is not a thing and thus cannot be part of a model.

If we read Hyperchaos as the absolute contingency of any thinkable possibility then representing hyperchaos might seem pretty easy. Meillassoux is just saying that any non-contradictory event could occur (I will not consider whether he is justified in saying this).

So perhaps his ontology just comes down to the claim that any arbitrary, non-contradictory sentence is true in at least one possible world.

I suspect (but cannot show) that the real problem with reconciling Meillassoux’s SR with MR is in how one interprets this modality. Saying that any arbitrary, non-contradictory sentence is true in at least one possible world is not what Meillassoux has in mind, since this resembles a standard definition of de dicto contingency in possible world semantics. Moreover, Meillassoux (2010) denies we have warrant to believe that the thinkable can be totalized a priori, on the grounds that set theory shows that there are always more things than can be contained in any totality. If this is right, then it is precipitate to assume a totality of all objects or a totality of all models under which God’s Own Theory could be interpreted. MR cannot even get started.

However, there are other ways in which contemporary realists (and not just speculative realists) could diverge from MR. For example, Devitt denies that realism is really committed to Uniqueness – the view that there is exactly “one true and complete description of the world” (Devitt 1984: 229). We might also demur from the assumption that the world consists of objects or only objects that enter into semantic relationships with bits of language or mind. Structural realists, for example, argue that reality is structure and that this is precisely what approximately similar theories capture – regardless of their official ontological divergences (Ladyman and Ross 2007: 94-5). Some speculative ontologies deny the Correspondence assumption, holding that the world contains entities that cannot be fully represented in any theory: e.g. powers, Deleuzean intensities.

Perhaps the Correspondence assumption just replicates the Kantian view that entities must conform to our modes of representation – in which case a robust realist should reject it in any case. This, interestingly, is where the issue of realism segues into the issues addressed in my forthcoming book Posthuman Life. For, analogously to Meillassoux’s claim about totalizing the thinkable, one can also reject the claim that we have any advance, future-proof knowledge of the forms in which reality must be “thought”. If we have no access to the space of possible minds, then we can have no a priori conception of what a world must be as such.

Devitt, M. 1984. Realism and Truth. Princeton: Princeton University Press.

Meillassoux, Q. 2010. After Finitude: An Essay on the Necessity of Contingency, R. Brassier (trans). London: Continuum.

Putnam, H. 1981. Reason, Truth and History. Cambridge: Cambridge University Press.

Ladyman, J. and D. Ross. 2007. Every Thing Must Go: Metaphysics Naturalized. Oxford: Oxford University Press.


A highly illuminating discussion of the place of value, meaning and purpose within a naturalistic worldview. H/t synthetic zero.

Invisible Clock: semi-algorithmic improvisation

On January 13, 2014, in Uncategorized, by enemyin1

Haven’t done this in a long while. I fired up my antique version of MAX MSP and Reaper and used my Java-based probabilistic sequencer jDelta (code here) to belt out this short improvisation. The sound is a marimba-like tuned percussion patch designed on the Native Instruments FM8 synthesizer. jDelta lets you take a short seed sequence (here a repeated cluster chord) and graphically determine, in real time, the probability of individual notes playing in the sequence, along with transpositions and velocity or tempo changes, using multislider objects. I then improvised a few ornaments over the algorithmic variations induced on the seed phrase.


Relevance to posthuman performance practice: jDelta is just a sequencer that allows a certain global control over event probabilities. It assigns played note values to arrays, then determines the probability of some output related to those values by imposing conditions on the output of a random number generator. A smarter program might (for example) use Bayesian statistics or neural networks rather than raw random numbers to fix the probability of an event relative to a given musical context (a little beyond my programming ability at the moment). While the program is not remotely smart, it mediates performance by allowing one to conceive the distribution of events graphically, delegating how the events actually fall to the machine.
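The logic just described can be sketched in a few lines. This is not the jDelta code itself, just a minimal illustration of the technique: each pass over the seed draws random numbers and compares them to user-set probabilities, so the performer shapes the distribution while the machine decides the individual events. The function name and parameters are my own, invented for the example.

```python
import random

def step_sequence(seed, note_prob=0.8, transpose_prob=0.2,
                  transpose_choices=(-12, 12)):
    """One jDelta-style pass over a seed sequence of MIDI note numbers.

    A note sounds only if a random draw falls under note_prob; a second
    draw may transpose it by an octave. None marks a silent step (a rest).
    """
    out = []
    for note in seed:
        if random.random() < note_prob:           # does this step sound?
            if random.random() < transpose_prob:  # occasional pitch shift
                note += random.choice(transpose_choices)
            out.append(note)
        else:
            out.append(None)                      # rest: note is skipped
    return out

# A repeated cluster chord as seed, varied over four algorithmic passes.
seed = [60, 61, 63, 66]
variations = [step_sequence(seed) for _ in range(4)]
```

In performance the probabilities would be updated continuously from the multislider GUI rather than fixed as defaults; the division of labour is the same, though: the human sets the odds, the machine rolls the dice.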


Aesthetic Excess: Ballard and Stelarc

On December 14, 2013, in Uncategorized, by enemyin1

Radical art defies and transforms collective modes of understanding. Wagner’s famous “Tristan chord” segues between classical harmony, late romanticism and twentieth-century atonality due to its ambiguous relationship to its tonal context. The aesthetic value of Xenakis’ Concret Ph lies partly in the technological potentials realized subsequently in granular synthesis techniques, which employ global statistical parameters to control flocks of auditory events. Such sensations are, in Brian Massumi’s words, “in excess over experience” – suspending practices and meanings in ways that catalyse deterritorializing movement towards non-actual futures (Massumi 2005: 136). The aesthetics of excess provides a limit case of the reflective creation of value that occurs when we modify existing modes of sense-making or embodiment. It also provides a window upon the posthuman as a potentiality shadowing our interactions with technological environments.

This contingency is amplified in another radical artwork, J. G. Ballard’s novel Crash. As I wrote back in 1999:

In Crash the technology of the car has become the adjunct to a violent sexuality.  Its erotic focus and ideologue, Vaughan, is an ambulance chasing ex-TV presenter whose career as a glamorous ‘hoodlum scientist’ has been cut short by his disfigurement in a motorcycling accident.  Marking the parameters of vehicle collisions and casual sexual encounters with Polaroid and cine camera, Vaughan is a social being of sorts, assembling around him a crew of co-experimenters whose sexuality has been activated by ‘the perverse eroticisms of the car-crash’.  The novel’s narrator ‘James Ballard’ recounts his induction into the crashpack; first through a motorway accident, then via a succession of techno-erotic duels and excursions, culminating in Vaughan’s attempted ‘seduction’ of the actress Elizabeth Taylor in the environs of London Airport . .

It is only in so far as Vaughan ‘[mimes] the equations between the styling of a motor-car and the organic elements of his body’ (Ballard 1995: 170), modulating the symbolic requirements of Ballard’s narrative with his histrionic body, that he can remain its primary sexual focus. . .  These impersonal ‘equations’ mediate every affective relationship between the characters and Crash’s residual city of multi-storey car parks, airport termini, hermetic suburbs and motorway slip roads. They are expressed in a language of excremental objects – ‘aluminium ribbons’, Gabrielle’s thigh wound, Vaughan’s sectioned nipples, torn fenders, scars, etc. – whose very lack of quotidian function commends them as arbitrary tokens in the symbolic algebra (Roden 2002).

Crash thus constructs an internally referential system of desire around the sites, surfaces and interstices of late twentieth-century technological landscapes (Roden 2002). But despite its contemporary setting, the novel does not describe this world: it potentiates it. Crash exhibits the contingency of human subjectivity and social relationships given their irrevocably technological condition.


A similar claim is made about the work of the Australian performance artist Stelarc in Massumi’s “The Evolutionary Alchemy of Reason”. Massumi argues that the content of Stelarc’s performances – such as his series of body suspensions or his hook-ups with industrial robots, prosthetic hands and compound-eye goggles – has nothing to do with the functional utility of these systems or events. They have no use. Rather their effect is to place bodies and technologies in settings where their incorporation as use-values is interrupted. Of the compound-eye goggles that Stelarc created for his work Helmet no. 3: put on and walk (1970), he writes: “They extended no-need into no-utility. And they extended no-utility into ‘art’” (Massumi 2005: 131).

Stelarc’s somewhat elliptical rationale is to “extend intelligence beyond the Earth”. His performances decouple the body from its functions and from the empathic responses of observers – even when dangling from skin hooks over a city street, Stelarc never appears as suffering or abject. They register the body’s potential for “off world” environments rather than its actual functional involvements with our technological landscape. Space colonization is not a current use-value or industrial application, but a project for our planned obsolescence:

The terrestrial body will be obsolete from the moment a certain subpopulation feels compelled to launch itself into an impossible, unthinkable future of space colonization. To say that the obsolescence of the body is produced is to say that it is compelled. To say that it is compelled is to say that it is “driven by desire” rather than by need or utility (151-2).

These performances embody a potential that is “unthinkable” because aesthetically disjoined from our phenomenology and world. But, as Claire Colebrook suggests, we have been incipiently “off world” since the dawn of the industrial era:

We have perhaps always lived in a time of divergent, disrupted and diffuse systems of forces, in which the role of human decisions and perceptions is a contributing factor at best. Far from being resolved by returning to the figure of the bounded globe or subject of bios rather than zoe, all those features that one might wish to criticize in the bio-political global era can only be confronted by a non-global temporality and counter-ethics (Colebrook 2012: 38).

The counter-final nature of modern technique means that the conditions to which human ethical judgements are adapted can be overwritten by systems over which we have no ultimate control. Posthumanity would be only the most extreme consequence of this ramifying technics. An ethics bounded by the human world thus ignores its already excessive character (32).

References:

Ballard, J.G. 1995. Crash. London: Vintage.

Massumi, Brian. 2005. “The Evolutionary Alchemy of Reason: Stelarc.” In Stelarc: The Monograph, Marquard Smith (ed.). Cambridge, MA: MIT Press: 125–192.

Roden, David. 2003. “Cyborgian Subjects and the Auto-destruction of Metaphor.” In Crash Cultures: Modernity, Mediation and the Material, Jane Arthurs and Iain Grant (eds.). Intellect Books: 91–102.

Colebrook, Claire. 2012. “A Globe of One’s Own: In Praise of the Flat Earth.” SubStance: A Review of Theory & Literary Criticism 41 (1): 30–39.
